[Binary artifact — not a text document. This is a POSIX tar archive of Zuul CI job output with the following recoverable structure:

    var/home/core/zuul-output/                    (directory, 0755, owner core:core)
    var/home/core/zuul-output/logs/               (directory, 0755, owner core:core)
    var/home/core/zuul-output/logs/kubelet.log.gz (file, 0644, owner core:core)

The file payload is a gzip-compressed kubelet log; its contents are compressed binary data and cannot be recovered as text from this rendering.]
#9įbIDy@嘈3 AQ@n,{W#QcT?8H9Ց~o;]*ف}_e6Qi?~~[oCq|h&ySh.zkdɌP@spNNd2`xW *Ӄ84_-uq䍣P(o' +\?n▣Aq(k 1TWS=:I ttpD?xԞ~螈q{a)q-l TQ,D$8²Q H|}-Dհ4^3B6a )bPřT*rZæڀGb Uf28QYPkSSǣkU/Ճ7惋E`BF swyU-]$*o*B"߬s]'@ۛn}tzW~k]0Zj~ ={1iG&A[Xi#М#w7boUߎ]1E6,RAJYЉue~0UPtϿ?߽_㛷>B$>^_ބgb}R,?v@.LⷖWS97:P*n۟{c3}|* I]<7 Ea_ ݻ+z4Uu:$´C RE+Y&oŌύj Cu=GAuUm{oc/G\g/Ij, +k$DFh.ǀBTAkxB-D>;Im.ee>Nq|s\l:QkJH ^r4(SA#1Ua1Dd>Is3v`M[`]UWzэ˾o6Ff \ ҡ4$M,#gt%گ?\btp (Voc5Ά6y굲m3Ntoq2vtuV/[ޱzvvm~Qb@iR:lЪK>7jx%QXxtJ0N)֍v+*5V별lu ozr17a /"^I,막\d0FU5@z˫W|\u:G7 'huá$8|j|iѩ\f.Ƕ+ߩa~`~Kyg[mX]$ߧU."WUu*;UjA+;q14&z4 Z)a0?@S㢸TEH!_ls.tAw'Vkba9{&SoaY ˖=fx.O@tdk\ȗH^# Ŏ Nn"9DuٲϊmPh\MuԼ̚=jÏ0߱ ,Do2PF*Ҡ 0猭@b'Q+ؾGW 8,oIR 'J;pN6M &]/Mty{e;' xzz( ._@Ng\[nR]]K)O*g1*C+U fUw.|Sk?l]>.=>N?|[n47;\0בjuaG;7=bš`6mXy]xq{+?-F B,.`$NRc6XX:4Ċig62xXV;$X9\1/|tav4r13<ƽe"P,QN(F*<АkHIr1@ePdl shpZb[VVel(LcwgMTXБc=\" /(腴b/DTWC2C AlI}&aMk*|/p8O bJ9JHQ$8跭:Iς%mJtXK>tgY i zz}oVe(Y6ڽuy͕7JJׇ^zºy|@x戻ꆊ4KTXjnjN?-ڕe+H9.E_DN" ,ߐ [JY ^)/ K!rkiN"PYJ@< 1[p@1sa1e;!TT1MF~G׉~ښү ہG@" =(.#O%0JGF)8rdˉ,QL7q:!ӫ'SE["!pIХVHd6zg\\|Aw,}H(ɸa(XC=0Fi rQX5bCMs)c *8UR'y! `Ra<Қ3O}C,8Y\<ڤ /A3bDC_8GGJz̞ڑ<5 9HGՀ=+¨[)ʘ$RDz ^ZaNX/VŊjAG bKQ̘cdf0`=nܨH,a0cFIP`{33a lM񸌻 [ or\m ^>2<$8$9"gݍR@/% RQA/l^Kl׬'ϽUqۯ"m]*s]^./XUke"㗯=0~If}mu1?:ɇSa m=]EN=HA[ N'c(uc%&y$C vPX Jy [chkNϺy% Y\KҔ{4uOgwN ?9& nO#UZW}*K--s5,MQI>&}X۳,Hy-8wUenumi%7ݛ2!2WXf6l3%:[h]ٰQ(y6QE^)nQKjjӛRou2jb6zXExYf-YefM1*e\Ik#洊GNZXG!RE_m{z9h@y6(jOg6N썾l/Ŗ C@r:crJ8Fz9]LyII%ɚ5ͻpHioSUմt} Z&5Nan5tv1fla aU <k0:)ԞbJ 3 N»yweޝȻ#8dgޝ̻yw2Nɼ;w' OMr@8 "}(]JVm]n*{i|/ )qc1L6(Z`7 _+ }k,XpϿu%T7g72P|Co(CdZkc=u3i@^PT .Nb%2;57IB)#aM ?]o_C `hhbҧh#̏So8 5~+ݨ_L7w+҃cK$+[F櫓Dil0ibL?*)ՄxC,/'v2ȲWefJ)4Z9T l)9%)$"4AN2[lwDPK/T0iCN0GQ$BqA`qQ\JDSRr$^`TyfP A53f 3jfPv3g̠y2jfP͆ 33flA53f̠g̠T3jfP A53fՌ2jfP A53f̠T3j<< Q<ԾÔ}OF_kJ>o #@K%*IKm)JA襄,G'~+F7n'ekX'rn Uzga B#u ! 
9&eqAR+BdDH:-F<SDj)8˰O b%3f8c Axn]X>^{h舮,<7 @LZ@}3WSU3!ho\hpM.|ͭd܋t&=1©`j1Ŗ%ZZb]i!%[A$b)>m+.|>KX?AK!׶+q+/&\\?5j (;tr5.mC_seO0UMwӝ?͚uL=Vt[û?nyyy=ΙJ/ZU67V+|zԣIztҢG 1?2i%츤E=NJN=eҢLܩ+h>+o";]҄z ~Z),mqG(@_jȭCJ 4~i=4iztrNB(XlcxBȤ8jjtu!yΠyZ\0>]!VeXV-YQK8jgz{"WAkvٵR^6DztG梕w°a`|3,0',#z$He|_s_4 ;o+;,OJ:ӪeK߼*F..+LSeZ;qUQl*)M:`s ^s)$tt|fep@4Wy_Av؜vonW&~9ϕw;rSz{Gv;38>::K]{9Ħ+KH^ GԨ'}2_G بD#_`D&v}r,l5mO< Hָ@6Hr}ȕ>]N؛ +I|9y_J,]B JdIcJ-b&_cML$''rx ӧc%]cm\R4>KdW!4N+aJ^9 `{W0ig'u.66|Vcr Fz-uKq[Wqs'~ԅSO]899>nV%sXtJ/(`=c*e]5)1#X!u^jF 2הTD!p5DPycP0j4Qkgc%5fq5Jy]8gR^N7.Q{'mfI~ScQbyӳi]<]<88`=6d bK5"WB|h% YH*p$*+a(ZHjlL2 <2ԦA&W1T V!T}XcFz&|ɸ=wl־m*Qk?';})8mxЦA1EeALQr!&U{Q]s6?i(c]RBY׆)q,)!g1İ0k[!dl(iv%bMYI^EvY1+clVA`&(x?Y]cMbPWR-Rj.[Y [s+522E]➢b|,{dZw`6bԉCEIeOy,Mh9Uf6AapXPl&ʢ%C F.jD]ہƒa9oљ$P?[x)bQ;YZeg3X^kWYK{Ǜ@Vivkx*yloA ABNl5& 1ƊqƋF^ݩIdK.|p:mq3WE&cf_}ТX)xHD"k j[AcPN!҇9FaBX7a]dk4بuQdU#;w!#Dh1TGM4m:XA$HQ+O+)y+DF( ]qXn^@=)(6`1ћJUDrND9V(G1&ݼm6J1l^ƫO1wuoj95Zkd>-u\Lo,xݎa^۝q&??\v dVKy\1$kN]h)Zi1xv?&(r@w@y]VfJ:fJH;frpN.ɼwsv]2SyiOG D{/r)Joѿo%'痺f3BK 7j%C.h+ś8.:Q4eם/izgּUٝߞ|_xs@ :M?'`KNӓ>ca=(oRFDq$np0|tK,oEhLz$3fGDO/??Ypd{7 j\6gǹ$$ӶP2v><:έi9;Mm]s4ۂ?: ֭'wdv{' QGݏ߷o}oNCxo޽[Yq"3pRrKD;@pmd)]Kz_}9 7*#;A^P9xv)ZJUe; 5HzhoBt-?+:f4:mb0Uh48yyYV(2x1,:и3̦δ~ږx▱7;׷)K bT, FΆr} n*e[.$#l.rfbGF4fRsV@bU{h/$J7"0|~.?WGerV kHSt1X\BQ9NȘ*SnZA|r0Iteo[)etU2eeig1z/kG  l(cűU{3_.#M閝Vguz>`[',傲xX(2!b>="B[ +ϝIu<R`ङAnX-O(rD!U&! \Wk6ZjJY1x*0V:ҡ_tz<;߷<4&"3D[$qg"M,^"}΅ǼS{RH1>bm(Y\'A[DXTU̽7)Ovm+ &|pExԧ| % s,MMbi P [JM%Js EE騫\mA>DĨ6׀L.ib iWC]UHϠ\8f4jm#87S7ߛE21~XK\c.#:z$tW/eƫ?KR]K~PQKbZOhG JBSa *ZRS [PQBIdEYH{[ ffKU {oy GN<>ڮ|/f݋/ϓ6+ɑ%5>|hL3f8m0bQcS zȅmU0[m&NJ(wᝣgb̟7\nƓ;*2p@_X97Ѿy$1(zB'N?э\4Odt6m?ݨ$OFц1C{ DA`ͱĹƻ.o:WJ it p'`NǢex_j!mYd`\5[b'GK7*I _1󜉃.*RT 'MS"NxLjSYǚ"H.a;29;9&{k~}χL2*yÕ r%iӣ]ݧ]^?鮐!߹6t֬uu7k{tMm֟X9}ҲJwjz|x}sce/\GG+[]6x̛͢/_tQUl_y笹xEM#1 m0l<0! 
7[n+q\X(f;[r}fD,Y:Up!8.Jޛ}_s-ɢ!RkIX┣9XƬco5pbr..2jlgU*+o<$')Yҙ\J,әT考AllFzc{~>1ngQwƖDLcɡp11* ng19l0cꅗAHRn{HdĄٻ6-\INr^Uk.=2HOJV`]e@hw=}a%\ ՙ%; ݐjY>|\uRS@@3aJmrC$ z:@I(FHKTZ:(dd=:wY{:@[V+A;@@mb(^OϕRᆳYu%c,,T.u&^iӋy?-Z8ޱ+@*"H!p:v}X^Y"EqJo{ZN's4Wk;E5^K^z3 {Z{-!gH`x`Z2KC%&uCmfg.3no7[&њ,xZ-"샶L"J% ILC/u]uk<n[[-۰ 쥴<8 }l\$Q zFD-|o8ݩͱБNϙq0n\0@H9ĴgօhE 9%Q8&0% Q*Oz##Y18E1WK) K .,u}Voi̇;DO&2j D80ni 'pK@3'䊝-%ďFx.OF7/OUϦU9}d GtP&2cq5iA8)sC0!fj$['ΉF{`dto\Oz4fƁ+*NoMJ $S)V>^2{+j㔟pMjATOx`q/ΉD//___?ߧH;Οhn0ҦpDs/&_/!-(˧[åWP?U)~qcrh;Wd (:dzixr|XdU.Yub[\6o[f.2*Ċ/ f4#>S=Orv^hb4oHq6=ک}RX G b- xJe,).L"wFҾ Srgg Z%q,3mET,` J6u" Cvu ^k]C+]UThcx7aL9/5 ~V}cSihvPq,bĈ[T =dp+Cܐ#;mEx'4!:e/}`}sFKXI{-X0}]HȂ1Rѐ: fG3;zuv-e̸Ӓ$#RF(H!L\6Qt~@Hp~C2,w)!E%[RKd.s>w*2>l-Bunp aC~uƫxk9MZ{tG7xYb78sl8f7L=مzxuS@EN#%r-iaulWu Z?u?:htE/rF&ٜѱv5 idE# n?}4|'*[̼0r=?\k~>lp~T_:@)EKvuq;5m[ݝKD58+PlD bRJׇqg,[6k(Kꅵ3eNϘ r'/Sj9d=)jlG]YdFtt>io4.1qLHDcec5r6ӢWË~ﺛg)se4KF9/Eĸhʇ2_ G'yɉSR)(uL^!IsӽߌN˨upTg<h@3HuW V1yelf)EVrn3&f5V7tCruh *nF2DZEel8,T>JɨJ2p!k"a3´0,$& r[h𳍢yR dUr> 1ƬamNmKͰs& cn,Vs޻Ԛh" z)4蕗>^Tó_Բu|H5ћK"oSoRʏ/ŔةlZZތ6 DIk602`܆>d3N mb:2}f԰7yIGs_zfe2}.a}%At8oSN} ֮C_Yq~[w ̂.䦓|Jz;YisъF5OgeճբE#6Wi\`?*;܇\ f'%˺:=`uBgXeSKo{픭tQORۆilimC z<}0)_icrK.N=UMVtc Ga},qoptenw|SMDl9c&xhr[MFcv;jl2)r\ eIOvf]htk8t39P u'ޖ'-F߁\>O;^r*Q ""r0FKXZ^hǻyt+3Yxs{ĝFwחo ؤ vtU+2 1-ƌ2 Et@v}>ВPR^wA [)wVwhxS˓jPYr媳MB\TpUJ"] KZ7'yfXωbnyAw=j0YR2d .AB e,8ƵY#sz.*3KX~coEx"Z>`bXv~ +[w֐{mZZwSkFe=MK"Y$ !d``_0xH#G3,?ŞFZI5x.jvUŪ_MO>L0VJJҸyَϯo%0+m8vxM ɰNy?Ր؀1Z7BUn(*n[#{$g@`P/S8_+AuyGrآ7 \2tDP|*]0UʌYՓx ),Z!"AxQdF!D`Et&ezj ^H]tPBXlGXH9zJvz2vB=r;UZUJr;_`,= דkR~m}1y{׼Cv]WqANN&|ijQ>(B= I A翾};#eO#IR*n~Ei^f;%!uhEkPy=r!K~G{#g#ݽDwg>=1;q=: f8~3vߧFS8%S.<[$( RԗZю}vρ(/@E$ȁr֒1+$JYETiUS{dqn]t9֮=|(X_ۮqU3tz1ul~!M@]gH7 r!k%2r:ŨmԂ0,gNGo+I^aOcKDIz67flWg`ǧ''2z|ٓ/M 8E1#|$fi>=*0'Z-bZubTYJ 0~{)>vv/|Z(?ks,gg~-Q13f]J2uʠ6PG_c i\=p٪*N0VUi/;8˝'|8:͟ǖNj̢/YzTYLNWRo9vYlBycK+i'HbuL$H%.t+XG%87{p2MrWv3G;Y b`K&;8Nj9Y_H)T?,j!32Yt 0e}C*:AdRoLaOWEvcL?Ec(F8zY@F1mZ2B&TRVe$6F&L7BZE~`IGK-m-I[ 
mmzzucIlvlw9h{m*L${5O$}շ3^Gg}gNc>2AНwwJՅdbS^✠1uaR6.mFvf쁒$Kc>1"u`#A, RW82NBӀ+]nzD! ]6;*"ɻWaXʓ9,>ĹO.kx~n6"]]$scH<8Hz<l^0 fīUpx*e壟Ӻ3xNvJn'VsNvRnۉsZxN뮏^(\Af׻U)X1bxj[K}5Ls2*$WV H.:IZN>2Klo>S@::ȮizZo}̋&~Yך|w믾so{JKFJn"/5si+N v*(+ŠΧ0Dd @G'GG^0m?xSKY.)/Sd$^W1'W G`$QwQtϵ4mu(t$@0xE,xILE-&$HcCS;q9%C,U-klk19.zd:H; P>VUWg|}1!'YYVGu+)%yߋf!q/jvbO&~I'ݰf0H=tT_LI8;Gfsv{uVZǥXJtAϵ|6W4ٵ#a|ljP@!z:)?Nމ7-{/՗ou퐖Zx/n-ruOoXA?4suד dACuZ|cg/7n-̱pZ,VM'ӟ}~=oHJ# ÐfSY^Ѧ<=`źL>&zv>308*nrhԡ{0Yͅ+kVzYIubOLNZ(Nt$;`u 7߿߯}oF}/߼[^q30ϱ{6d^<0ԝrmOUדv;z`VPs(]"#^(&iuxUnUWnhB,ՄHn|R Q[Ǖd΍:ޅX|]7bfk?2 < _os"ݧ9x 'I}ҌXcb@Tl$h%9PP@U0BMXF +.Fv)S")g\q L蠔5L/:(1qNMYەSt⎧77;[wuO|^O@󥛮W1Q: fR\%taWr:[ O*sb<=&$&"SdpXZ|E'4QzkgL\zǶ1sK-O&ev!sUP".L-#%2M=/e\9+29?_*bEvN` AA dae@pTHF`-JF(2r;F5 \.IBV6$:*M`@pgc| \aAضz=/ J$9 j Xe$ňE[؜% Y}5gK;۽_Ew%md5gRL.i'EZ {֔KSd*$EL& th-MsO?((0d ^{E 3 ֢HZ BA cQPԊ',Sdq|{ק@[Vmm,}w4S{7%{Rbb-Fe&HOՈ"d(ĽVQz{#f14txE7r"8P͑֯;M+0  C(b0F6*`(\x" EWciǒ>mw-r0+2(*+RjAG In"W\JZeacJ>=-E4ޡt.!.a5qv 3lM64.~>nmm֍x:SsߡՅnG÷,}nqmuqj,pnz f@Eo.=vrhR7jn\B.~$ `jPvu3o{tK6#:6f!leXwiz*;rz~̻ybdy#[?yuw[G!Z/m^Fqyg\HWK;:zn~$nn"UrEWmI\k@PG(<{ama<>5}nNXrN^' 2T1>)0Jl T4E̎(I2Ff^tQ;Ҡ!rcJ2Mg*v5qeR:|뫛f*se4F:Ԯ$:PLjhe$4(ySR1 LZ?BҔ}ýߌ ek<J'ә%)%2 <Ō0漼=q}9ef)EVrn3| ksEPEnh@Bהroc”b ,?= nb4#d`8eJMҼHRkoOU킧#ZdgWNs&+\ʋʋwW.9SF>fV.}`CUhbV6oa~zM6|AzYKUwȂCmMIrJu9ZAz-[ō &2@W\R*$:"UY)_X1PA=Gkm{Cц\;䢚8c+tBxC8fJ+f_ ,SWM?g7IaV>q4hH9CkQ6xeriFl,^C>8 'Bʈ]_XN\v?\Kn޽0kכ^oNRoJ9io䱂g.VuJ8,\d|Y{V1/x^^'4W^1uzuqd╕bc ;։v-?ܘ:Wmlۄz֖Z6_.3.dݶ+:hc"}eoN=:-~)>8\yܸwS|XQXlW[~>l]9G\7 {z\4;ܼ0p-۽Hwe}gZӫv|C0uCF!UM'od7=b %4G>28A \|a!ƠK"D U4eKIܿSg8__bJd=d7GҧV Cߟx_mŽ?#+ *B"iP>ɠwD Z*bѢ0BH% Q&V"COzc[ \{1*jlWt9i]&KkiNΝ4WZ2kU CUb"6XV-D32H0AF,Q>0U-&ǘgV9l̒ 2=)@<.f"mLk8W2ږ8;zڈRb =m!ʶ{[xpKRq:so0N'ޠqvx=q-Q*C tgcY+@V D`!s0&a݌mVfi(E$#)ٔ ^C4D,AC,[;c[-q[l?5ry*V8ojd[w401f"*JaA="C%T ,y//RrP7yVڸPiEˁXt-R)El|*@:*jl-_:}x*"kҦhG08.eEE T >0ǽb)L䭮jrƴITֆKPbg|BIs# 4qM |]8-¢\tYmkEe(zœ6Y MhAyFw;C^^ѡo4Vx,Y1r) RLKł1KiQVgdRQ.N`\:y>r0:s/}`ځӨKQ\gdc]~golm<ۛbAB) )1QSYz}`,ALWW,;E`gS \ iOj}+e_=Q\]>XztY@Oyd/zʴ|XU~|>~sF 
ACG>xzRsU5gcV*R"=K2\˲(>aPx_krr)z0 p,%&O5_~rIK9"tDR7FD(3rO2@ӎ$xr" LH?(*58am9y4PGLʓRkm#GEo6/[}9,eY$m!qI-$[vܒlSw8EVUXţ/߲9h}&ٺ,3dF4:7V]1V)YǢVMZՖNNEh\d7k ң{15U(#=iSN//oo?E㡧X*{9N-6,U$z):V(btF%. @@1 '[lV#gǃ@B#bB"upTgyIRr-@^ŭlkJ4+ڇPW+iL+crqVL"š٠DPEnV:.*-Wc4efF$c LK-!Kkw-PƓgNjY]%}JW;nX[*6n=!ffonk/hxzEd+ɿ@б< MA4,OryCԞ[ohw'_ȡNkSq$Zrc胖F^I+\"$U5 D{+AbzZ'V[><+~F͉1]%v)1/ ,T.+ ̚.fn K1IEKmqB(I$%RdOrS shtO?|;'+=>Z'Zta0SDWjy{E>y;e|D2p$GN,XP|gx D SKjjbJ$Z6vƲO L/ovд3惖;&- >hv(! $޴t>NV3MGpFai>ַ7P4'kwon[Ƽt7R-ӹ= k;{ߦZl rĒ.Ķ;lxvi<P^j oIgL'HSZHA`b6)oA%$\jJQ9v᝽)[bfwz<]]~uU5W%w85x%)REt[?ѠoSA:$[k}Z+Ie_sC)!mܓH9(Uo/EkrvOI,,K1!qL2GsNz6Aj WUÿO;۷᪪9 $ҠP!IT)p8b+XdĔ@dUW53A&VKl#! ܘ2uP%fZ"ήSbU]˛cq[kw[ٸO,_IytztȭRHKݮX'@rV^krV-D2 :"OdiDm4jg} ~SU3jkr9VZjkc\Ih<]ѧhsoR\KϹII֌١}Fr]X3 UօnЅ{<_:A=qnu{_ Lf'vkl2(9IB:fF@ c1o 9RYA9xWfh`EmJmPCpdruYtƔ[fZ-rk05Xv58*~+6ɮ2ֹQ)_zecN tGRGJ$ԭ$6UJh92T11"2IG>Ĭ"Cr>Fv}9E#VjD^Y#A#qc ' SV$YtX/J<&KRV`uU,gL 8) pI\L-$472YDהp@vjl׈@z][SA]{rfjg*)-ݕƪ@>rYj JW8&,T y` :ioѸ.#r/a5BIQ& WFkt;dpb"Ι+2Zpk%FS(}a{@.c1[/{<1Nˇ4̊i:}:~&S<}݇V/ͿӞxc}7ɛzO/9{_¾&}ZOi/'%rwrU_.lw g8ChrR؟Mi؞2y )4oD'A vH)CC&B7ܔR>Gfw?Hzsx^Z92:gAPbClut*pz=lAc2::Եfhב#v@ZBZVZeF) $W].E94Y$рdqFA ~ е4]vezQV`ª|68S:@A*g z<'Y/E=|vulL,8u-T6M0ji#.Lz륄ޠ;tgR?ݎg->@ Frg"(Ȝ:gȊaj޺l'>ԝ. 
76Y42_VټطBƻC.5c~3eF;K7mڕS*g1Y͂L& b |ȦVT`t\erļq"yoSw9I؜<7FZzvjlGq W<>jMn=O8W aJy!s}c#O+ҝ~L7K)yrN {wWbQSul^Nwє}4}BݛJ\8/Վ>l{ϴ{.Tŕ_ ?]ϗ_fw\/ik+ڮF.=NXL {gL}ӈifY~%i:`Ybgjۻ93wV6ϺzȾYgMHڲɑV2R&V>]aMYqOh jAmTOk1x<9H}_~~Ͽ~~x?OdqD+0OXHp$%5eto3#݉IatoUG4eb)q~*mG];U(MQ\}('޻ܷ!IJZ BZiaLb;tG 5^NX{وioH?>62؍=~}ba}oƝT@!/‘`\2)v CȢD.$}hJ4|=ƈÑw[5Y2]vF(][o#+1098 NX$,^1dx[5jIhK;5fW"_//PDlSfSg ca.d^,MYMphJKFCD5_:PT-h̃iz@Fƚzg0> ,/=pVx6MOM>эvp(pJ?1K\`?q|OOZ0KKJM"&hd`^KC@ZY= >g_u4&'?t< zp'k,i8t:74[k3kxs]~~T{/+LZ/ޔh ܍/}Oita [ ?P{E Bj3ٿUӥԢ81?S,Z,qTYV%*l%$U NXH_Se~ɤ(^zl'ŧ&bE Έ"d=KcR5bJ1 -34q,VjFYpggO, 22㒉X̧@5^S/ 1eO;׌3 7)*2P#gJCF!,*$R8q{Y1qvd-lnPT|,rsTs"hgB 2 ('f=:Й@I+8a@+yPIF2i.r9b-Ԅ|b |ZŠV8Vvuo@Q1jGi3Ԉ6TKQk+82KW!rk:Qq~1x#{`2M+p|"阴!>23M&ZHT)iiV!!nrBǣʖ`p_Sg#eoTAZX}2C DL9m%H:DڙG9󘃱* o:isAƣsTKQGؤTM/W%7KHr'DsE qW?uλ 8vHuT* \IޟuTX G<{?YyfdZ(d~ PG7 gE%RFlʍH/("E݋88ۡݪCz# Mu</\uu{ to~;ssyQzS~8ON:::ʡ%y1t襜} #H20=P;6dR>JG'(*UQ-W!F!Hc, МI-$gJ:Ag + O /٠hp#t; s(c?6ymL3f׫!,H*( *"FyBHY(M l>zw[xFiꠡ%`ɖV>r.9P|eWox|"E3 6Jp-1ѼAc#"ZPw5&&X :jfȴ0Y^& 2SJ<"IMŇ(!j# Z:Q"u;˝^8;BM}&e~p7aюx:[ks=%ma<* 5g3cfu3 vRٚg Q0|:Yum:ٌ?/:uϭip[:ٞ"q4e-F[wv^z^jYrYfW7,yLjl-7UV"E[y )Gs՜=鮩M-q[hakdLWDUF*;E".6?Ll.ʞk ?ֲ,.ZeY˲\weLR),qWY\N]ei?vwsW])ƖgwWY`s:*kOՔR]Gw4DLy:Y\u2*Kk]]CweD s2*詸,>,{tWmsToh5\5'.zmyU cU^_~7NM#f_Ns({^z)?7VyOMgq=7tRukvoM_@8b4vpw$%vWOUdqWOVøI)q+w%;wWO2G$icwWY\N]ei>vwԬsW])& zmwrvũV~<q"g:343XHX#VFj]Qcpשr-=!+pW4>ZacO ' rm! 
"E}'KtTz}Rc*rx?XJ=+ǃ!$NpSTSh.C=**!!p7$1i\F;Bo>|Bg:Gf28Ϟ1{xyFINmF(ם*p2jDbJ8+N X=U ] j$\z* ( A@:]Wevm j3|SShr{}o t%Қ ;dlzSuQt7{%;&LGpGm`*Qfs3DbKĶ"ʭ50zQnZy_j2^yޥid=SL;]Lt!I{%, /{Yj0Knn^S)upn d@vLjj)4% Sga5,sev s%,ŮlXQQ,:AOL-*UbS64~3݅,4ۖq٦F?v[Y]u3Z9xGcYb@:#$hƒM^k] VvËF03M`ts=pv-+478ҎvJ_jm*#)iY]O t*R%VI`^r`pjrC.VzNs1:#Bb%!:2DEp6h)!^K<a U[k4Okzz(p{ z 4X{y0  SW"ԐCnὫg^}y0:hXYKJjӸ\c9a3Bw}Z׫G cƹJ/!ĩlimz|rzzJOhV jS!Z2yDYJ.;wH24^a|>Ϯ |PKTߢe@4=[H9"ZUYS&] hMEi~ɇR9n %u4qZ0+xk"c B2*Z驎7Kĥ-jӲ K?: qi]L]`xP\(cJJ%"j2|QKξJ X%8Ň1IL8فx;F{(tU[aՊu8jGY?12v@JY*dqHuL@(rytp3T/Yb Ж9UEH%dҕמh)"u6LWNCh=6Mi\RT`*`d(Β "X VG5%vnC4\)/*v`qeF3CxS6l=#mwb&W #.c7 dx'^8CQ"E<K +cB:`aE-8X*b*:28l%4&Z3&* [p1q[Zx4NRŵ%=%c qEY(4O*Cwk}B[@|4d)P[3(@ 1P rR!Db:Q"u "wVz=Sh1qv (7c3*q$qX\FALVׁI d5 {&J'I$Lk"c\ hˈU*dd8-i+"$4h$:1*/GL(lnp9K",Qfrpqp,NjɩLFzW8-͜`:g_ZJ UTjTB_+o_Qh#Z1x0 4cmJq)qpR'颞QbLޓ|nY ]04zw4) !*T Iabf<m#GEȗ6Y|7rv,m`b,ĖL~֋em)rd:"UŧŪEAtat~\eȸ#fN[f vi]'ޠӏ7UP1 ە&hsbϐ RJPDd=Ic WE4Tc/fᑂ6uhu:;4lA* p_li:k&h޸cOAkIIˆ* @&B."@6&*eS"0QPJ J^a1ELH2#CيFE @ RQ|!\LF5polևQL|XhcW({ֈrЈF eS"B t-E kԌ2\V9G^QW:Ͷ1"U RL &KV:E-i5UtRcMSֳF썜 NgˌKvՋг^A/zδdK2mStt0?t6HBC" A4ŇЋ{4/UM> Pc^]#7z06b9n~|G6X\]rv'n'ȶqʾX-$.dK #J0:Q9+2m P9$4[U:K=޺n3 w,9ܷ7r<]= vVɼR2ƆVy#ۛCåLo;;^˵%/."bނ1Yk ]DH]Pa:U<e 1ʱ0#:폃nRwիQyu>"t1T3VgZ~o;]*,[*jɛ:׿I7F Vm5Yw)yAY ?9J<)ߌn5xlTǔ)$<[g lFgfqo4sUdD!(EGC_ׯ^' šM/! )˯'ǎoyJLZ ku}JxۋIna3QbMV)C Ίd=T` <MċZaSM]\ߗĎ'7;8y )vХդt.xt ◞&~ ¼5~aVX+AD٠)RF#_ OGU-I0* ~\$A8WT P4@rQz:ud_,\rhsZKlcVjj($)l$)N>9[$eOb2. 
mWV= M8KeGJ(~󡈠KrI+B L֠bCëΰ!s92{PxY ۛ6jk=oN9;4Q$$d%H7},>[IA*lJLFoѵ(JT!Iz];EK6PO伵7/Rk)Q}Rt J7Xf*ddHZ]*H9X }oz =/VXY PB"ddSHc)Z hS+5`B4 j`' v nC^63/,3`1e4#FxdAFe1ыثͲx,xeq`Y..EVvi,eЉ,aP>ڨ#´0 |hEMMXdGპ_Elb%?ZP%)K%9 ^"#\C>K  Yџx^UCv3#:A/g3/I6i90H԰e@lD&*iKYF޼ Ҁ1o/J)ECZ&Dix5MFkR>sK띂g`m~*!bu=0}2 hMdnfK/ެאdQyPtA2ʢWk0ՔgQc61wyؾ" Gʝ ^)) ۆ>rãS؛o%3oyxVz˛ |`6mq@GOʭQ 6u2RH4C_繾P]o~YymmO|U,<c#M?ݏ`'~/~z݋Z^eQҵEK)!+V9+-zv4Vc|͗BxIK[&Xw۹G|Wen BY#^ ZݰvWu4i~2w~s[c`9E-%;\?Pg^sA 0XU8<17>M/W?~ל `7RK`ܵ5͇֫J=rTLM~/,rUJ`` r-heL1'j `_QŒ1J,~IvTDDCLS`6 `)I)=rٖcx>ݯ|;p+uwnYtKhaZ9+<_:*}M[{l:P]/E&#IDY|&}eWHYYE^\জS<o/»06iZGGTz O2x,5{)s 1O䎱F[ݾמ;\[z`c!c3l8"tlh N=?ƴG'{Ҟ;PetM cB)lL)&XcM$7+>tDȚ^e IK2Ɣ'ZHU%Šb0#.=2+ԢR`]v?nݶϊ 솮]W=;IomlW>V tw׵w~vdfuí篭y뻻6tjDǝv-n,un/zjf=Χ;[޽;lꙷyȶFQuC9ټVź07k.tHxp1 oC>-ZnmDsuYtxqf!d&׻FXm )@cH G&#eٺ!YGZS),Y^!&Q6i D69$sFQBZ N9HC6\H:HRL0y:ȾV+ٳ9eR1~g*s΄hlIĤ%Ń)|"քgR Y 쑤z0X&c @H:ҘD%D4;mBVZlΫ B`aD4''] ,ꞻ/|I$ƒw%x)4Y:~ CǤrgiMj1[+I6iDpqbZPh'Y-+ 2G2vN"I{YC*Yp Z.4Xn;/{6=!TR3?Z0 ξ+h i6eCRFM2)4 $Ԑ!>gcyݽ7 ڮlq|q2ۂFXo,\%z F>g^ϒO fUzaZ Iu&REe֨S%w-|-ŎT$;+ N4Z>ѨFOh2vukB6ht<ɭ\^}n28HSiO/9*#gZȍ2 @Z"oċ.p1HJG#yFWyjXzlE?XC]n[)^w{;~X?ʙ~vqE.Xz3NYe3lVݰX{4PO2nR4)>"tZhQh>E5}Ee!BCKn)/>G)M[gŧ{W`YzM@ګ$Ρv-ŶGZOfivUɴ+D\χ5r6hidHHb㌒ Z ^%ҫt=54mEYU=Y?3ձ hSz<&\-`_> }*a 2^7"+a &0-u؃5S|u Ժ1r"29u :JD Y~-H"j-E ԉLYpbYD)i"qFfUFΖq9}CpIME+jF׏>sWfh=c//mV3 wI/TYhgiMF[ov!jb׌,d"MTA* mP7qN'ljOBzBoˁL1 $TYM,7g %@rV]nxo)vEoY>e)|4~fLzLU㕠&ibhb\c1 gYx`)1{4(5yg ;Rߠ4.b A L%+e) r.&EBs$pJ8-R Gv*{Lw`jq`TWQr$h́d H*Q3H+q $Pua'Ɍt$ ;L*[A+^ cĔ᭕\QdvUZlQh#X1Bj|rDz5Nzy$=##1yGkrV D2 D^0LvLR0^;s&ӋuF?TՌښc@@mm̂4 ;ڜ7)j\III֌"3P5RMV;B*BՅk/. 
21eF NMnig'g3ٕ*dcϐ Pj#c1o 9RYAĹVdIP= GڠAh"ێYA1f YcQ\11wEkWڒZGV mJd:ֹ/iT1@(,(>J*.Kh$+E"11ԐJ&1Ȑd|3jlևQ?UhjqW+kDk^#ޚ4E#Ce SV$ÒT`aɂ%/a+Ru:Fml.@EτʤF$H4ٚ.ED:^Ky^g5.^zz8'-LO.M('%1ZGEQdU( zzqWa5>>|Vwq =A~U6'ˊaé?Tq4 ^lHWYo^?_)M^񓷗.NƟE rG.G~Ԯzj!r54]\i&2 $$]Y YO`_/3'@ІӅC$R!V%ҽ$ttta3ŴeORau0mz *5;ax4*`_SG˱/O4ȤzeýqxAx0I"_BG'2ۥ}cKl6`Fr# 6`y&ԧ{q^4Ils|ܳnggn R+#3:e'`F$2h\qڸi\xyx!Rmr:BEB^*^zȵZlgzy|x|qz>71Wgݖ]}{o,;K  b8>9`l/^0t6 7ȼ֘ikHcX.8vd.]CkJ,uLw<x=:2A`ddǕR6U$| NDBJYMVLu 'ڗgx򕪤!` Iǹ JpFpIXר9[ݥiPp/'(pu^)zvzi;TVH;< RA^oꛙ9 0$àV0*Xcւ82EL3kAHkXE%6j%}㳓^gϓv)G!jPZmԌI}2!#ÃbͱY€'FN[nd $DcB#BVV2mJj`މrV-嬕/$fónJ^9?1*YcʈLE$$c2s$a@J{)Wυgz =X I CH:x}Q'@vF"N@2h.V5NG},-.VJ%xn-FP1Vpǽ)3.1 g{QEqLd!w]u!f2]6Jf,DPbf"Rɐ /,0k/բz;"D8ѯ^Τ"w>LmP,BS93kBe J6rNs3 FދG=XJVC*vǺPmC^@=iD8;$&Olb[`e'ő_h Mn(utQޤ0^%)s㼵L5&hU?.7byS  \AZ9TJ4 t]+l|M!=B@K[Vߦ4|dޣl~s~4W;XHrűGGO `hx*_4EFsJNiQ @P"8: >ā@ɔ6!T@Jyb!iduGq#4)#0`r0JV|fBW2EQ5r6#҉M9lw3n6ήF-lX(e2#}1KyZxߕ#FLuQE e晁^i\ 10*;CoĠ=Mk4t6ٰ+2pP욕s훧og=t/hRѼ @ FcDRBCF`~ЂZHe5+ho6& uBd)P9mPt~@Hp~} }W;}$mΖ ]V}sI䭤MJ*Y9u`jPKfi:G .lvu6eC-発e͝mJo~v?3o[xZY:`f=)ڵhǟ7ֵ6.<Hk5nk h._E ^Rj ĽcY7!ٺ$O TcW)e]L >\.G1㕃^4-^v ;O@5)mG!R$Yi:{Z{ qI3 "eJzɴ`Xh0+ds44qRҒrtLe==crgyP>ン)ǿGS+-jN;;MoO~ӛϷ?!{_}V? \$EaM!(o'ૡ׎oxn)&uu}Bz(t2uy1=( KU>ϐ-8QNw:u2ଋ ք,'DX]Ӻ滘:];q6Kت AdGA{ Ƕ=Iܝl 62WFX/gJ{9_)ژ.)C@ckc׽z_FjnSLJ} odE(IuXɬʕus`^@E9KRIǜ m#,CȞhѽI$c^GP)dD썊1 J2$ cd^:uYi6aR/1",DDꥦ0J=)c"ҙiv6q:7Mqdk=Q]Á]b - LhΟj_Zy?K[C s4(q뼑4"&L'XpFHjfTF 6f^#Bi.7Ӹ !(iGlc$@{@ 53.N: :h*|Vx>~!#iL?Mϻ"BukJlZy:YqTp86XBBtFyd&F5uQcâG=A?"ql#KCE F "n 4H#H;Y +2pLE>,9[NSԖEjwWtP*2mjLnz4V3Uko[!z!঎y.\cɭ! 0<`!"#a2pAg=O6v@^޹f&^W<EP%J@ETF1$NG+;Bv3j܃^K3ū gwMxbyc1_R/t-(Jn>ծ]Ru6NЩ4BIɆ#!//aN(EKf-D!tkWq`zvSkt~.Gh! bTVp O +%j ;c!2B/ane6pޤMܷhqgqhM>> d%q5>8o %) 㼽8ospp#B\QkB62|U{T5H'Wܲy[ q͗V[((dQ}h]x첨w=MU|Lthuʪ/R0/YUBYJB1+yF)ts%?^v.}qpjpi0Is8ly}^P+Rh=(е>Vjw1YxG-SlW5qsEU7 7>a~z4J(R$KXWQ5g=9; ˫7f86E۱Zgm e\] _%@I5ٻR-G۠xk5) ^_GfJ[ |2g ]c0UKL l2c)翅oY wi G56%. 
=2tv#"gcW$؞_ϐK@aR$PR'$ HJehA^2(Jn &,2u}B.O28'YMSqyJ2u.OIJzW8<4A`US$-?z$=\B99/ u >BTeÜ}<_p0m88OE)&ۺ4%%R\^rt.%ְTf͵TԌ98x 000(Xqo 5lΕP#=)Hh@SUyȥ=ʼ)77[sJ_/-ToY@wdu?9y r'{.S?{YO5ۑtF4$B'ɁStu5$%&b49q9Jvp4)?0\=M\F WOVӤTGW p%zv U\%5:q5WIZvp<6;yWWhdx`"A̹ Bwf:j̡wdS],בzOf/oֽ={/$xp{SJgDM5*Cz : lp6HFjRz\Tknl0鐭w18!l(+̩3lkKNίnZ=Iqs O( bJǎ OOo (D(kr --% (XT<:"gcN+b )&HeTRʽ2VndI8IYf^)ʘIUg|Clui7|˜3|&b@@ܕ$.xqDDGÅckE qx,ކiM443_jj0ج<`%o0CYgoE4ttڻQA!8 J c`i0 oEOn!:d-%K~NP$:eJ;A[#sqǵu4$&PC AHRVa S띱Vcmy-#chn2lt˩@n'_oƃU4vn&|ݜmNk8mGm{CrM u&nlnvUon:Ktc!׊66IWޘD6g 8곎2JuCw*~eo>no]׽j/ƏCs%χIiu}e΃Fy?XM*&\G~NZMSm~[t}囎 2ϩzrENc+m![H7hޡ8nޮA ~kY&n41bK.rInO=|{c%j^'σݨ#rY}qvUwW|}~uvUTϪKl6.xmemebPLv̌1-b:L<383 3R9CyArE{LI8F#%F2*cZ¶nADFe61da6Ϗ@P׺$BaVq75d+PhsՕN7X1,`Uygd$F"IX q S +P$sPx\J7Ká:R")9IEdZ0@ƝVc +"XAPVwmH8`;\fv3`2,YHa[-Yv$/- qSzl? n+-CqWXdF^֛:SK (yp)+YYdF3jA 4GZӳ)^scmM7$tD&sfZƭ V>~I)8 \@Q/G* RK_M1yH1*,M%[!e&bpcfךK)HH_$yqNк0s$>+.sj^{Lvt24[ܧ?ovnR!,&x[94oЏ#iʨ3H=ᤙ'>Q4}pŐ,ҙt8!PSLp;f{H%,^ ^"X >1g,*=גi݆KT8(=}bWϮ ǾU"[PX;FKp2m)6ob8}XmQm.=S7z0^\ku~C{ӓl 1Fs$hxp8_[.vI4O?%>>xO\QSmIɍ3)~MӈqUfHBxɼ?_'bgs'=<z]?!7ͺYqVdV1BFĚoi[ctJ?6L뛛Os~x~yCbǿ:?PݧۇOwVd m ݛ_C[Q' SyBLZ k}}R>wp%xr rW:HF1{Gˢbӂ  ѵp.:6| ƣy@Rlo %W/.56R[c6JȾr#);GMȋpd() B`]iq^ #c@56r{#鱽 Ҵ.z!.,[ltFr.;kX0 07,`T 1u=bP'.9؍ ߜΑX}vv>bi("C4)c|)BF9˃f TU9,{i1N8 oJdI(2 2d$WjPvAϱy ]9 E$O SFd:"AǤSۨd6I t"pRS7OLb(%>hV*G+ k' YDCjP OFIWNvl4Z231ZAӶik$ZbJq/Mʌ+A8(:ۯ/l3XE7٦M"B*ۘ- (24B 1-o2P;=u{DV*ҋUc; _VM.nb9=ĜvΡ@8r\KpuȹQ甽g,Yzc˲} 8+KNt߷* i!r|/ ^%Jn=QlAloRHs(stMN4ڲ1 E4!)X`ujCR6{rN$XB4y2FxJGz>(nx>ƪ8Yv+ΐYiE0Fp}#bPT&r}}0R?mD£E3[J"2d3fo>Oa%RȕKa /^`^b%9)IaL9^ԑrGByLC cu*K1*2MY1mDPm+ߝg[yy=óN֣na?^=nXnv^Sr3o&Kl&0koQȟ_3pt5+pQ[VoҵY;үͦgհFsulaM>O}^eRLJg? 
CCQ"K]%P&۝K`w;TUy?(IIi+ (F Y4ZL8@3\Ą2U zQ i撆ޥ{ *&:a֔反|=;>z>~]l-\>zDS+kqƴC-B[E!n:H'D2e]f24;8sS_|(=򼇓vMəҭeRCnuv6P *sd*Dp%=2$s02 &:Ia#ۄ6j^:oҮ;q\b/4Xt[Ҡt>_xpBʶvȽi[-H߽ozwxf?2seۅ[˛+$Osӳ wjV.Ѵ-k/F;X7i="#\ .SYOs=]֯ 'X@:+vڦ7{Δ!L|𳏧bH& cC=x6š۪ A)`AȜ&NdMr0TuWJ]Q i_[a^ݬպt#?=9$}>F}1+۲5hMրn\5 2(J6tAiU !V4Lw&9N8}Cr=xy4 乵Bh4ғPedvZl8t29ɛ-=[zݑZ[2"7y][OcY*-]ޟh|4~ԖfFwmmY~MKu9u3$;`H򶃠adشd{}O5I{ڈ]w뜃RU!PbB-#1N{m8{-2(AǠ< KyRQL9)Eሌ1gR4rˣ6C]\9F)D;MM~)w[ %QM5I,,GӠJm)=2fymy{[іD(8`'srYEt4żw\Pa G S`9"e1Q3 E4Px#8$*?^k҉'j18'j5(K,ǜM{@"']3Ͼ4E7$S_ ].8&Iego3is&Pb}{+EYyAh$Kā Zh0R ڦ E2*#i#8eH=|*$ZAV R6!Ifbpv3c9R #p| m>NSVB-.6iԛo[(Phr7WpPifڕ%+CbI mQy4HyB!V3HYNH)XSa] سL\*/IvD|L+]a݌&fb!z~+ 6r*PD:F6&$bXr.px;cLxNSړϿpN aܒzQR@+RHdiTK!Gօ$rU ~:c[8iaF=#mħ I+IDJ }>5C,u*x5iYHhG8T#@dvy QFI{VQy(̈fĻ]U6u%"+̋Ş鏒+G2,)j]~Ӂ)A}h\ϋdq(5:C^q|x ۑ8n Y#wv5xA6(wO?Gah1Jk5V@T.(_ ҄4*9xT"yޑ#myGڎ*婰1$bDrAtre cD{I<1I|A Tne&P_[h.Pϩ0W"׫/W9Ԝ-YdWagKGKs.Lu>ʫ;)B>VE OL3Ub"o0쫊|2UB-%(at\`RxFLqND$4BhSm{2!5w&1>E>qf^=ޖ_f[ Kzq  Ag>g\t%sF+Xۃ>g}7Y2ɉ]!`ig J ]!ZE Qj%zztւw!QW.U]C*+͉ѺCt9%u2\`]VUF)MOWo V w\ X90z?qm٧[pX/X+s ?l2_ ؚ#윇 ^{N/1x.iJL I#\F;cg;vK֓!ig׫ …/cw` apMЪPiJu]zʕCtE:CW. ]e`NW=]ER9w}OK٣ / }ܐ9n]?ЬCw0Kg +Jv(5 'N]e/v\jv( t @J:DWXCw WUFZOWtvJ%ʀ ]e3 s;2J[+)]2tg 2źBWhX{ۡ+%4]RWCwNVdBWm+D ?YJK*uNV `%f0U+Di2JoE2oɹ@TV (yO{?ѽzkʟQSQ AmM>Gq8eҴb &X2)5yފOחcUv$x(T/ R\ To*#>Dg,IrBV?7/c,à8N_@+BXY+帒aV{EK<}/Ù澓MPzW#P%Z3+a`)Z䓾ɏhg938kdelH#<Gz@[rGcg, Lde2ޥYrzϮD tu`N^ap_a0\zC)ZFW=]TQCCtKp5 ]Qm"]i]&WYd$0:K~w\Xbsz-?v\=g}[bmT`D4p3 !D+r^d4Kr;ak-ᡲW7ۭ<,>oS46zöG {}xmT1=!>@H́9=SBq̷[2@Oj͋A뗱d2omJ /U;~[p}ʀ?-~|rK2_#gCMlQP6ޣ.=F(kn3'gp:I l5X\HּK\cIIO> s'zsI gkrrLZ9/}mrr>L>ڊ+$}¸^dF6DCMc "Ua`uc/uΓvޒvbg}* (bځLJvS^:%̠(01 sԌ ,G "q20fUVolu辷޺|;nHC⣗/$1@K}VO{lbM:OH>RarJlkʫ TJV,Ǭ3Q@HAys fytqDCJ qC 4d_hB@`2w*G9K2_z0v7gawëF%wO|oogfDxZ>`KƷ3olbʉs)bp/Bc,:jlJ.IgCeR =!l~zH==6ќh]^Z!,QDZ„{oyTVyx4⌌G3OrQW/z9|a&.:$58jB&`:l- 0`"جX'XZl!]E-  ?d!T^x'cr?=R43TxZy8n5`wK13~FM̰&,QEKI7o1bBlr3ί? 
?yION\M}G.,\K>88Mf7qV_ϋTHu>qߣr4iT'؟X:R5Ż|J\Ū'-ak "15;"9j[jH{e.q.UW5Li&&YےЦ=K-NiOK hzoCZ>̘Z୷m VzݱظqeWΆK,xfH~yFit FfJJF/G'G:'[R[bSmߧvs_8%1>ru~YkM'6/ji.*`wItOZ# YFw$||. O{1QhI֧apwǃ9<Ɣ 9S#dS/n68; 8cw!7UG*yZҬY4mf9.:jGѵetj_4J2Q\RXf!σE{p\ _BLATYYI")G$͞&^LհQQ4Imgңd%gɁDHQAgaI<7>he_ASAS^IdqE,zt*i~ =ϱqYd84ިJ< QKxTdwmm򒓇FҮڇɦj%ΩV"kק1$%J֐I4.Ѓn| N)XѥI.ly2$B^лL:z6 kWhg*Q܌|#zr|zr2/1מ|[Nl`WXJ")޲<ŷΧA'}x0cYuj&.%:@vRA, ʗiP.SȠD?hioPZR!*% L.L$bLW9j_SiB[8Z\lt'\prf+UHч@1K("%E/' `fhPnT_|ijg^FԫN(1 X˪W6f:YB!*^$Z@r3 K iX ŬSĐ ʘ1:s(M)0E.6flEowʰ4\-Z%<?Y,S_MyɳLлqȭ{H}tl^*j5b&:MNYh EV-峜/E. ئ8[R*!D+YKz%UOM\)mNF +֚95c;L6[evF]xY󳊌R({w*{x,.OMn^ LN\c+R2s ە:oRb"wUV0.KvZt /C5|)jS %[ض C\DuVldL}͸cOmGZ جaC5 @G HZ{4w AG%,XEAm18--dFي*3TE41Z :?!]+;o}&܌7WU1FlՈFF5&/9Y *Q ;E"ymTQ3` [pI83M5"9)ȶq"U GH!:&YȖ4ۚ[B952zqMOf\r["zqԋlD4- ԓZ$QAUH!i aԋI+>T>⢈dGn>cهQ,яV 5~.&Ξ>lA.$;ͰKYX1~׽{(.QQ܏x"RJ\0JhɅ#*`D,Km*(IW 2ːd tz _3! "'Jk݊]z|z4lks_ޜ6j$x m*>|HzLi^/l^0 fīWQIIVZ$j-cKGKss: Ů{>-wJe( :PW@W݈fGmAzU>SAtL^[c0I%DO2@ Sf.Ej,*9$!VDΪz[w5qϠV݆IW9My/5rzr}_*~-m; iBG }w_Aj_*.:Tx0TюYtWO5|vzl7YM&Of;"GBZX i]y)G7~~lQ"wfwF S&FuVVG^vO3Ryrb C$>,V>ۚ]JPJl폕E&q/wӚh 6l{u/+7CWpHYڞ`F̂ GI'UT`vs"*^S< o]gRw4+w }ۭ{żߎc7>ݓiXɵ⩴:+d/}"_tbd5X"ء ʪNFKec^6{)Jɂ QtQ:iQQ3 _6c֥;u*'?l:i?f5e얡Cˣ$r#i1u}6]GfLy׻ݛζ zqf8l^~@;W57vv~3_+;vf~h[]x̻Nr-/\sP_d2 W6A/ej$Wбߡh'ڛɚg8rCmM;5 I}I'^ K i)X}$9%@6 ㅣ$0fG ҭ ׃[SH`yyE`Y1fXD赖JARIAљaW쳏/]7ՄI2ꈗwytWA jW5N߿9৉|gջ %Yd(?yΎ'iR\3$kVLP:6cJ 5JIl?l aP .d*Q|º|3 7ww] M=8:aMyz'^u?NW =|- oYW94г ɬb䕌ԉ5ONclNAUGP,N9;7:5x|?`v3w??կo~|w?__{W`c"$.К~wJb[XbߪRo~=4xkXvQOA.U0\Q՟0{dzbhAh Q ƴ|K6a^2-y`޿_W{)SU fvGAU~!O${j_/6GW%/H*CI3b` ĆRJh! 
S,X %$FC{Vڥ>z .-[p6:%wBbl$*dAƍFQj=-gT Dcut>i|~٢e}Ţ D,]{t'~I eD%Yb(V Hx٠+CEWv*Ҕ 4J79x.F AX[T ($֮d(\邕A ᰩyQY:V@A`f"~PC:cBV_YΚNSQ]`Q2%MJf%fM(t 3 EڄF @nE\wӏB J%>d.V!dP%k1'D CzZ͠VOF;iB4Z,0)Y/m32&42bJ@+"7*&c˚39,xmXëbSNP,5vCB+7ko="L[ gEE-D[ỌJ~h MD R8J12%) _r^lcx 0vQkV:dc/ z6;]$ [V 5d029ۡuLlIؗ:\&k'T\w>tǧv6%`~$㈎ 9S57\c =0i aߢRk |2=U*ڳN=U*JSTAgOreG0Jer|PEA1%p޺ɎXwk\G}u$cv@br)9l0:Az!(d-4EJɳD,JHlkRS:BɣQXe!H>oIY H2&)J96#"̧$HX[Yln/Á,0+Sԋ-{-ɖlVw|xsGNbd2F!Z tf:fk&mgmu bLQ10(="RQ1:uB̜t. g9&lx7y>͕*w6}co%P@`O}^hP5>i<;Y*JF58- XǮ{RE~"9M=y\}/Z|\D빾zX2 3j4edh%s=G6:GA(ؔIƹ/ dn!IRPYَ"~kus3=o׵FPN˴'=|!ioE1'nÉ.F]ƿZM@k]:fDadBt'lAhþj8?\j4|?Yksl{\ߖ侦g .֠ G.gi,5ʡS9EQG`XNLضBu4( %ƔCy UT$, ߘ\73+;N|]LJįjaOX:V^>4fhKX!m)%p$TV'fq@NU ʨ9S|+ς-jJ/gՄ&r_m|Ѿ">ڭnQ-_vu/tik0dDh2ұXQPrt<J#3I(Ix&Yg+֐b! A(BD0#0K-@<"9y*]4tsa? #@YƢJ-ٍ"p%H0\A@% |U?ꛍ 9AŖ1Pт8T2ϨGn'# ^c&)8(MȈ%2)4I+5A R# f#")Yi:mN0 ﬑7ef=VBಅද='ӯuz#> ;2ߩ_Ru|r~^w&]ӥZiL'EH7=ΎK<9tzBtW\ɥ<CN2~IA8?NJ4κC__6ɬY;Q>櫺K]͚g=y?ܰafMr?Iat~IECM Og?b*]o\e%b*+nFh}h gC籴dv$msiPw:ٰl۸Ǻ1c'dwU]gGP3g<N7iuNw1glD4&h۵v_O\9u8/W~:M2^Vg\I+V\- rZînģN[†­FPɭYx'F{e.Z . $㊕KgϹ(ZnPBH&XMYh]M\#*VFE ּ< p)=:ptk:$~Yj? ۴Iۂblcz_'e'Ht)N^HktI[)6x^}>Vt},n3[<M,Wgy>$2koVecn1no(9u`ˮ'-Ӿ=U3α+*Hw:BTd t ҵKy'CF˲4U=KGQ3FGmJ{JBJ9؜R}tfSȟ'nش`yΫ`$:0uEOʚ̧ԓ?G_G#["" V*V.=zU sO 532?5Qee>a{BR.g|jG_;EY&ݵJ:jw&& ƙKSXol2d 70YF,9H2duAUH#9G< d2/ۊ_{}ڳ=XhW/, 2I[ey1'&1m#yiA&#Rҩ|ƹN9 -{A>* J0#(tɧ(l8r9iAg2$&Ӣb^њ<6gkBϙٯ1QK,U)yW.I\QouQt9;!Ԟ[^^ήf#$AY*jЎQI&GYSD1*ViIpT JiӶUFiyOĔ$e*V Kr PZ#c3s#c; pU =uEe[,.iD/;:ժyBFh8q^ U(+M,5 RF%P$D" s8ϐ-f)!QP^HJhGZU֑h!9+P.6FfG8 y,ݬw jZG`* }*«dLp o!-rH"Ǩ/ZKm5b H`PhҙJYUY!S1$bvp3sBԯ"V`<Dl?EDe="%'1Y I!y E䃱dt\)sD=J!blZTӍ)BJW}+i֚:8 TӔ5FfGī̯:2. 
ߏ٬1.{\ܙ46[ UKpƺ&ɰ>J"JV>Rz/cq9phR n0<|SdYe1Ge׼ rbq&~ ~teT9Pt~k1f頋QgV) è;>~M/QwGQ~Gyt 9RXad#kJ $+ʺ(+$:V@%($֥Qf4IbȨI W6@(ե5ne~cx*m69<;l.*Tdz]5TggA< u?^oe~MrA`&Eү!rVcϹ\ԶϹs.3]բ^![jl(%f ReK5s*Pyw Hb#1-ޗD1J] ( IJe(8X5FqN%5 duZwؔixdeI^֫Jy`56V2_dHO)ieX޿ e'wuZwkup%}K$&P޵55#݃-Eұ ˿$ༀ&lV'T\5 +"`T mg;'W1ЧjW^>-~] ׄ'jK9l(bo4sU!e !r f(0ü\m,^c{_3>X9(+RWkIH9 aS28 i[/Zpux֎5 =֌S+rS¬4~q ؘ~\:7ڜ&m17Njޮ X#1h\NFGU~k#dy-_FjHlt0}k,YhSOZ+XWɗBVcNGMn~#qTQ͕&$z阗2R6>?MT 9o%NQ[:w6O>{ן՟|7ۏ~;Οx3ժHIǏה{˖7>U> ~f:Q)mo eJOsFbaʙNـ]l[]-rA;k| 6a\oq9n%tZ^@̮v|zg=.[;kqluF*D 0b\ $M@GSHdJB.@" .I(؇ KҴu<_+֤d ( wVDPq)~x$J-ꎎ-{YN;Y>8ۗxAw]ID{ëE`xcp ImaVX+u^6٠)RF#,IiK.:D!HŹAsEEKc WL\+'(-`6,mn#i-3%jRp)}A֞mht:%5S):#gCN,{_Feq)~eeAl\~F*$~>u.j[hARlh 2#s,@ʢr/InèfZ8 lh2%)5dѐP0Y㵵!;XΕF4b%( 9KԵ$P4d3^(KŠX0gƬDyr9;ک;)Q}9ƨD'h$fM cB"]V #+]EK|)P;'R@b1L6┆,jp{rsXJ L:Zt~It|_/|$kɢ  oF2sW2p/))DqeYgfvitt+wm=:z"to|(Í6ӯQ.& (DR㿲7G=8zZp#0 6AH!%0(R!ZKIYI"fKۚo,]."{%V `9 ȹNN+ܽAr3B;帅 WyuKAo>6elZld-`F̂;d* ]PN"`,iS o'»@o/ĠY̦57$We\C׬ oȾ1ܾ\.*}dz,++АOh [0N5&grlh{ B*՞T.қQHegVSҽqgz A< V$H1\M=`ud:ld$ɸ N9lMQSRD!F/k$0RKJU&31>쌜Ӭ, d_qkǶ}Vlfz uwEsWDJڬuU+JȽWWJkFuY$\P~\a,od$4u\Sj_yī9/>c A0)xvWk05aW5 Cݗ>s)9l5!|lKy)BAHjJZZMH+ŧ ᔃhɇd$EI9# #3 @BrNg}69ї~V0ΊƆQLXL)y Eu'+YCRD$| {$)b$ED;Vц}&mGVEi"ĒRC f`pOꮐ˟l@7렠a$78骻:X%sJC$xB"-I媯2A'2A%ǭY: m&(F1\%cvղ^HƦy^ʮG~O ky2U͇A%\2{k ctv^0fM_>پ cA&(MpwaX44?<=5[=(m=$+VIcUZN٬BHnÒ-@F)ŘXs -X GD*op U댜M-GI3mҭof̙g;lh8>~ "%>Q͛`} =Iٰ} (9ƙ"AKԢfA WS=GAQu>mOh5[ݹxxRdF()9i2q׋YC74 ɔPQ-!:]=6ii<>zDk)lݳ a'RƳOs;>z }.x-~*oR\_ cDJҰl-h| l4,^N+W-XZŸD i8R~VXOǏk?5x Ѯ&I #:)BGլ#tL+h*YMѹ(1PTr^iE4*KL’tFΎ6vSڧ{z.doG8Zohp}{ƦH~>^TE"d@$e!)26,Yۺ *>c*//ظW bx`j}CBU6.K9+ZgueAH}>{-Txi10 ?3ܞ<>#y:!vVwp3m&Z 8!eF3tl@.0A(Xl 12rAd%^tx&ԫLT_0ȸ-Tm[dϴ0x>9;[Wo R;n;1L'E<銓7u6)=޵#"eqi,fpv,eW[,yrg[%٦q5EŪȺcvj-7uYlzBɻN Nl|^lJ[ONO-=zP-hkS'Q.{%QCL37@:@e&n^-AlSv\y,L`*[<\PV(T~` ֥ێ* z.|>Ё ͮ:ZY!Q^k mޮ,WM"R٦=K.-5=_Xp3%LGՃpoհ)HrD=B%IJ8cdvR&P* G#a|8z(rP <=WBEq*斡Q n,ZDj."̛䔒,)g(i ́>Tu\\S;i-'d(/s66}#ą/9ޞS|2:d4ؖ2-B,_hۓ).׫CMK+sb~>)V ޹*A3*`E4BAtKn-!ҭF[^4Z,(S-0~-&#u֢N'mjib.*wIt`s my"X]/u}k󢙩* 4M 0F8̲q:Z` 
?FGq9߽^}n9}?}Lo*4g `ڏw̺Y7W`(:YJQ8"clLA!&̲M)4K,́1J@451"o3H!J@mBȬN*0-FΞV}H{w(-?|?߇rApZ<ܕ^DGQsd +H ȁ'#'1Q; E 4x#wIT^kN!EިٽQPj4sm<_KY1>]ʦes-S5]Y}á'MIrc;awz=_>VpʻO 2J U.}N>E3$Cx*6%f:JFU !9#hΑ=\*$SV dRmB!UZ2#gd,UbqGYe+' Š )mL7:$ogbS -B4NˢhJTm(8n$eH 4]p^( hIɬ܃D,Fn4v(eC:,%wPX. 5`Q%.N5bx["|.h 1T+RPaO!ߑ#= Gz*婰1$1I" ^:m 8a%IYI|I T~`WV; Tx㩎F&ꀉ` rhiFAT@"gy\~UgK̶6Mpx{\HP/ e}l A A<z&iC~%RkΪgL:sq0Ix/OJDȳ_X)OTN8Ssp*>Zf*:o!ߖ+atL`RX3{:%*xbAsCX C{'y+Dӽ:i;Wec}je2jdm=~z_ • YTQ E)(9IsYRlF~yf+\:{WO X^cCiaOW6Ip0DG*VP[4" }lõ\wHy,YyoQfIĻ7lp!٨r0N1S>V$.JOʆ|P&s\Ss\J8_%0/Fy챝8jxY<^7QAFҤM%3*1DS7 -y;c(f2Pˈ+?|䨊q5^*?skj{Y*,ҠWIi"1hE-n(;@q ~l*Gil=l26f݇,c:Klxzk4߂I P`~#$Rr`XipAG ` 8K2^SHQHKq^CiA '=Rx#)a;P2b #Z[Œ3c"x+e I B_Mglg^ϭ|:gɁZ|\rΎq?շޣ g<x3:Ӏo}u8t#0LAБOB{;j{44@u>8? | C99:Q1x&GQ{WK|]*yDgc.9('JǛBm5ӓwtɧT]?f60Bөrtn!Ҵh^ۋ)UF*[;R0mbRZ9mז];Mg_.M+dN'o$zGafY q:k`^Ŭgr1gŶ);*.'7j\q:k·qGo]_q4K,6OE-J";ѻ^w^__{%j ,Ѫ}$so~fMXޞ/_z᝚mXXc[e藋O/q2Its ODy-syT Rw[MJ-rA;)K ȒSoeR]mjmWr4/ Axh)@&}>D:ǨFJF8o@ I;Ar!O<'APPBl>lX = $EQ ΢L2J'5d\') -%:KN̝NROܱ]gss:EqԹ6svێІO0+d !M6B*'@{L3P><94HTb@2T+"@L`҃VMRKЉȘc?$?)JqN qKE !@Aٻ6rWXtڱxs+nr[wd/FTI-)^忧1$%J%J%Jw91~~ch Z%j8wùjCw)QM"{/Ge{SyeTŤU)N~E(E ֐@먜Cb5Ȝ(K)WwP^mi{u<پAeDm%id!X|Ok1;QX8Ε>:)lrjbf!ЖHAsvژ/ֳflgkSpB˻eL2{#[֔ 0:ft .kUltiP*pZPo`y MkIU fl6iPY(U%kJ E"3@fPN8imaYd1YaSr(a5(Khc_Q"$Pةòfkx7mRq^ĤC1RN80u 0Zho{Gei_lWy)V.F6@gE6`BHG)%:DpMIJD烅%D!$h`i yv!&l:YC~+H@DM;Y&$ &ϔl8lt>4PgiiѼС } LhfejwTJhM$=pqoeESh,Az%dwQ K>-^[]C8?>5K4}Ԯ n~s3#6O8#E8?wط۫Y_~&2_}t>>u $ F4Ĉ,r'.G\Nws6*cڦcdr2~Go:o'{ɴۼ65|{F|S2B N֚k ZO2p,EglxLZ%rOjtM3x+7>g8=*!/[& y12JnUZ-Ji- vTR-\sk>oT Á ]K.Ơ:DG矧@fdcZwmbqC̚VTRryC!BF@2N HLr U*AZ tkD A N;-L].D%H;g7VsʒYdr|D_0ΊL@cVH ƔB FkHڧ$H Ў[CМW)"(XMFC&0*D6EE2ԃx"0#r:$-sB⍰#I_IW[a{jX_#s^yWKP]IL z50Dr cRD(FiVl&:C:6s=U܀.pn ߞM)~ri=u/[8[_!0tU' !{HzDF*YV31uX-FAz#PGR%Vpd2J)Ě  H[XCJ2x難Q:jLu+C N։w3^'%i$!%>SNp>}s4>i<_H J+qZ$_/cIwR;/U'SGi:9;Nx: s\qfJ}.V2sdsB@L:7(-twBnIu5/'O>}o|&iwQj 17I~4 U|ol^_XFNGg/彖D %=-l(œ7%KkrީB,'G&Alʨ@E S -}LN.hPHBP}mB%u3q62NRrC# 0t\}R9:#.b)؈12r6ID`x0 g,k<^ 
T}@#i-|sX}@b> }@b> }@b> }@b> }@b> }@bWR; 8BjCHxK}@b> }@b> }@b> }@b> }@b> }@b4VvW#!B> +U1yԑ}@ca> }@b> }@b> }@b> }@b> }@b> }@c}9jO'{[bMwjn:@YY]n@*%6m ZIƶjI޶*flKy8+ +#\ EBw[#w GGW 7H2B&yS8q5B\+\\i զ5,F+s7^v(r3N:]mⶄPvjT?f{j wsͥQWz*T2/Pn拶mٺXC]w vYOlYP1P8]w}ZoXuTo5k'donǔIsrA[^]ʷ@li̬26oL+ .;uTQN\.{Wʓ P3W.z([OeڐZ7 jp뚿Cnaҧ^˿"ʪ,jTnɫTa>ߣ*9|J~+lgAXH3M%b^2B8QoQ%Q?k\YW5!72:,1թD'A&d,,Nq&ru"<,EY])cQ%l{֮Pөw-J/kZPOz5AyU/A =OT⪟ZU?RV VKW X@W(9*BQ+PT`\W*=!\`iDW(גGW2Zq&vBU+7RU:u\JW#ĕ;"}\`K'D PmT Tzqe#GW(҉PTpj}HW2 qbRt+k5\Z/R q5B\avp \c զ]J/9#:P6Ntr պPegq5"\)-:UbT:PԊA~bDH! COnbZ%R&6Ġ{`Z3[R*o !\(  ʘ:PV ^SV BBNtr;A(\Z#Rq5B\iT pVdpr ڐ:PejnYpep/9MWFIqU P U:qeR2dpr}+P Ur28J\9tcɵdFQmH UFϸ!RRB. Pm UF0F\m4 {:}W W IWV%Վ* jJJ M>r*TI2}8ب@tVniTk*dL=D= l"ի`A*~+WgudOܵ-Aʻ$kIU,Mzղ\WɎ7K{0ĨLFS^O#OOOl#6b?NB Pmԩ TW#ĕ2JN8|\7wԚ:P׌J[aI #<P% ZoSLm qYpeV  Pn$ Z-TBq5F\Ykcpeΐ @֋'+T`\WΆ$!\(-\\ j:Peg&jj 1S)۠e\=.4!\GDzո~j5\ jRc @W(7*Vq*f\W:VO%Z:B^Q'҉!L0ʱVy2BLtjMW9qeW" 6t+;-AT jr!ZM ; z?CMR[q壒J P|SBB12F+tIJ}W(әwrvTcGLl~l~d 5~jDyߣF)lѫ\Z7O\`~[8/0Og^-tJf^ʹ;4jzn0W ?K[wΠy,*Q@9B] (Gz2.TZ@Īw)1/rO;;ڡTĦŸrV=8 JBxOqTpjmHW2InjJ k!+lOTpjJWR qeF PnTpjM ]WV ( - CW%+T^茫IHt\/tڡv22ƈ+/mtpΒ @:u\JF M 2"+T-**}yA cAS B.ATٙz˸d)3FLHevRL`^z2?jfԚSiˁ}J{s^$+l<\\ 5OepJi%"+-\Vq*S˸: 2 vt+$6q*PhI @W(w~jKW2Hƈ+ks@|I~r+PdB0F+gBԔ+ @&t& |2*e\WJ!\`\\:#6$+Tydp JM W( {KJ?B1**Nw3!*X-JHyHי ;Iqu\wW-ǎ?k9Mz(F;}JT)%K{^թ;Q:dփE6Ai1 &4Όrҵj L:FFrwӵXa"-][70u{$ Y>-\ VtVHW X+IW(pjKWZq!\ႼqO}FDBFxpRt]ܡw%]Ԗe\W[7\H'JB'2%yWg^yJudPN]ǸdVR2RUdpjӏP1c$ {:Pn wjH~" W!sL}:\`B}W o.~:\ZR<28F\El"\T(^xЁڣ֋Nm"Vm*u^Xw`cbS]ƛ~ 4M?]ߧyx Y4W@?ۓў_[Owz+_~.X,jC_u/zt)]GеmKͲ^́t*4z8zGU~A,:*j }K 1kȸPmQ%$DݭO^%|[>n.-ciOM*\^J|^Rܝt,b]sXfU6fJ_xǫk;X:%:[u3W-ΜnV\ ^\ !8fqA© be Q,<籪6lUU9Cefe(릘GmI*jj*b&lu`9rOҢrx|HɆd,q$>(e] m E.:T'JZ3<آyTOp,wqH eq@Z"CɮA4̌8Wygd#6`Y'Az،*G+[p5q[Vx4v_]{5[=ߎ&y̨N' ˡf6Bw|gz'khVq1Z:q'mHla5aҝ07*0b³f_5D'q7UP*䉡IIFpU4e!2{B$RxdO( 'KC"gTiQ0KsAω)K.5Q `C$EHчด!\}&(SA2& N}DtnG`= U%ecd^2* <;1R+i ÔJT5T8`ND 22R lloxQ61R7r%p1Lm') l[0gI ,tJkYq1wiDr~mOO%@6,T{{g[?ȿ}L#%6X0I:dd$.y0%dFGY8r"X0U=&C`+mmʂ65J 
lLk9WgrR=c5q{zX/UKʽMo 2>.bNkxwO85>}Lr-PWNV>DH+IyǼ% ems]U-Ȓ= GD-ɀK&ce$9bf8=v <]M;kv`;4DT#xΥ$;Y* f0Q~;!UU6>$LȐ@JE RQ)"j4p5qÖԏ>x|8cWGܥ!ir 1+2tYryE,Y3`21$R`uU,gLI$pAB.~&D6Fb57\rueXM[G/NqS/Be_. 9Q e)uRď X.̵vCcմPT~.l&&SnȽg 4Y$>qW;\#0zcrqq9!Lt>3?*;hSw7لdbVz3Q;&q= q;wܘȕǔ#`% Eg4f'`6jEb.5W6N:{p۲mC*-:yw!I/Wf^; 6%~=ǔx]f[Mp~yksiߕuI,&}TOԳ `flALW iAmų!9Y:]obm#M.Re\eK U>n]7MipV<~tVjՈٱ,puKւgq~ةԞ@k*r``S>&"Z0TC{gGALw. yh{ DFlu dǧ]YYƇRP Z ch,S4KB#ȒT@8_1{,{:X:HBA:d5Jgm!I%0E9LٗSai% !sCʂ2{D&N`]xކK3<'Grȼ8wdKozDs`/9VѵHrIhhg5X/Obl`yeT!*ôi' )O1[!*cy2MRZޢegKDǔΓ2u,QdQEBkhTK }0rLBOw&Z`>pa+/ i0rq'ĬG9{1ŗt$d.h 4-JW~Y J!  O%븚3x#:* ǟL# w3ܮL{r`w dt$y݊<*HTHEʠe9 & lS?Wgs[z4Y$@WFktIJ8LH6,qsֈу[wU?׸² oBkw5Pې}Ts\o6i[u C'zKQ6hA@R'I V5[HT&hzlk'm=#09c98+" \D%J15Vj)K1]w`3!+) ^r28&~pD>q\߿P4ЅfQ\hca†s^gL;#W:T%P1RDdܭK*L@$3 \JGe1ܐ#{bsݴz=<@,g '[(_y,$ǀ$ 8'V ^?_VS|}OdKG?=L`.O#w?n@q#?|0iV^ä+ bJvJ (|akg!Βwg^ыo|>h郲ܾFZ6myMJvI{6ifLV[aVu6iQ`bŃZ\;ӛp2sg4O? fKؗ;dr{j{;tYd@84/E-Dx0 9cFPn8b /5&)j0Q00R>'=xG8Tz*YǢt[YC& KT*)rDEL3G"DQĵ"*QKyA-/Y||{j)ڤDlߗivw]z}g''$PYjPKfzw4ɞNmv.l; \qnwMg=|NVZ0v;ouw;޸atLO5k%IrtLdvqlZq'x>OhE|˹'LO3NOu"m\o Uv$ީ ['Lu-4W0H`%FmCT̮n/|{3};:0"V-_ ۵`8x3m+"[N_ʶ֑I#]w h9̺!M8fbYGëOƼ9:G{Mvut;kF2){\HXu4 FyP)?,??JSM:?.˃x-_ߗo=x}o9Woi c)%%sᗇ֜ {c[[I˰.rh~ar2F(ֆKgaB1H{׳ λjMbB{0k>a\krǸ=fe:.¯eb#xSW)ԗT;&"c 'Ep58I2C#Jh%Lf!8!鱃 Ҵ1.Zq<3mKmiKdd~R5'j~OΞ7o58R9wߝKXէ4Je{>t?}now\v OJ08/Q:_,@XFt :RS~8]~>{'\M 7M97Nff'?iꋟ./!GOFMrr~N݇jVur4AUNO7oCG$n٥;o-vL=i~h|_~SSy(LɄgNp6V V`J͝\C~$M݄{ؼ?rܝte=)65kJ60\>&~duQ$`kc.&? 
?ҡY^M)~^2:̟5UfW83L.3EYqjH#O7O8mwp~1|=CUX$A޸!鉃6i3lisrGNp'liwYeű'md +5]߹YFǏOVMkz-~~໾jW0@ o(ZX+k,Qm>n,9'ɓP)6FvV}(~D妸:!ڌr!,-SG|N(L1@AVFA"Ykc!+6L\kpVswY;H^{I^lׂ[ iBH5Qycu&TD|7QP(pX!sS[.;D2R@˘Ε1u:0/ w6@$γ 3Qcg¡򜡛蔣->\ ]b.eAi]pyi[1"7֍|@| $&5K9l@7l"iK&997QGͿf0iwG'?\#[wLа?ڋx tE(cJnܢ|v ӔڜIgF[";k[p:?Eut2uGo ?hʩ4[B wY|ۂoJf7yy[,:T:~} Af?kz8A-fx:#i5o@'__6!ԋru/)eKrY|5Dqa ߏf>Scmpէز\ޛbB+:2Pc o[%) .zڑۡw: X9h+>s=zЦGtE ]5pm_:uTctut%=+ڭ̶]+B 0ҕTY#0a{CWENtEh;]5(@WHW}Z"/̨-"7kWV3xtFR;5/tEhԎ(-g] ]ᓷO ;菺"7оVPL]]o_Ԝv+vڢ+{CWɮӕ1b(@WCW "`ڽt_ +Big8_AC/Vʭp+Cuk>%F$l%ҒVɵEmHXgfOݳ׏ސwr ח-o]bOx` yS7\^z ds=^jm>;@(̯mҙ~&ۅ޾+jroQ/oQscy|]zBl-/CϷ#;lbHST>e_I6~{ܲ|7T=|ݧuۧa}. O^Fi].޾;nE+ů '4G ܠv?`Z&iaQ7>`n>>xjuq}8=@os17>+ lU\j ?dII=#lBPkmN7S/Ub UgKx)߼÷>/7{HE|}}>]M0__K  Z^YWv)8J XURvִY9K|WeE7A[ ɔ\HnUK%hLZ^g\ȥ4S5bæi|S@ 7QC5JXstj!]6yCюzs&LDK)je"9цsPU,QVɖVIum0v4GIhSt@ (Xùڊ> V2\\הR#%$B"s_KwAh!͐U07T3F+PtP1ZrV>G@#5ؚ}Yߙ{nmS"AnYʒ&dKX*0B͐|v,b{xnc0j!SwUa}pxĨt # AuqI%z i竷}sykU 1WmhA֩ X2|,Rw|Bys欪:^yr}4xNuujaST{Hdt=]wF8'֤{ur3reߠg̵1| <6t&$7'XKQ,KBJcuߺq)hK -F`Ok;k]VIC2j(UÆFs`J"s ]0άVz"ZGuiE餍U(P!D`9R(- O9@EE; }ZqDg()f %JJo*|_A2Ŕ ǖȭlLE"n%l SKȆ4<ֺ2a>K+5Rl{ XU$XҼ1Q [(2rAEF@n7((ʡ7fb<5:\vAjc4)@!}EQ$x%4'X(FJ(]鬇-PUlA1n2$Xm#d*zSJi/j3ki@m 6Yo]1s@ JAU.1a RLANE΂G iBz(11RPkN a^̑]Q r 6PlC@WJo7XѦb:S %t4GQFh֙%@ Q 2|W ()4X|A)AIr"1YUDI)bug(`Wz,I KV4:)1h! Pe 1P #qʰI&P}E׊X{B}f; ^-=,ĥj*A BqǘAQRC:iFe9;_0xbYϮכ~2_C/xPV۾ L0B6#&2 b{PT8xiTA¼xd6Q t56b ڸLcuLC=GM`]@EE ]ڃJH&5*2R,zGpK`)`Ѽ &(!L_15lm'BTm:XtuaV% 9աQk4=!VwNu2<&[WB(NCO5|sdvsA]D]eb:\9)> ]{ #AˈzuC.MAy1k&! H+28Œbk ) x0 M"<ѳV W nk 9%bkVcCXDA5DC7HX] @N%m1Y5(+VF/Cσ'"W<`2Бx}6LE1YIbeҪ*QZ|MC@Q51GGDޙ7ʰ*+ᠻFRC۶55KA2Zu@ `5!uf15MV]#YA jfᭇ+WPk4CoMu^I"`!-vH¦lpݢ> f\%C渴˾L XWy>গ+e,|ϖôE?p$;j x@^,4zV6 `_{O(jq(XuuLk昴59%knĘv3@99˘jֳʌ4IMGއ I'? 
9k9j~Bg ڛR;4Κr5wt `-.hH H*@z|!(!=:t}V=n >v`E]1("^S:Ԇ&@u?;:YwP ˪ GʅPmQERGq3*nB lU?<66Tǀuml"siG zYTAZQkԦ1t&1jGb+Z{Ok[r3FwTm >@|5@RMwUlY;tڠ@4XAҬ ֚&-mh-\jԋ+Bj{@SQ AO8I{̚mh=mGPJ↕$5󻉷 NCi1[.ՂʠҘ" kEЬtԊБpԅ]0qfd JkBE;:w뉏XD;j0BI>z R DŽ^|.TܲZmvN6(I%3/ALqYZKχFIqs{4?xڛjK1KЄYR,j Nǰ;s_ׁߞG,vZANygg|>ϽY.vCz?nw/6/k9͛?g,}l6듗/5Ih~ɟO|ZzVkǽW;,~q*~j{xvޞo)Ƹf=s@{ ]"^:!Tt炦 WPH#> H|@$> H|@$> H|@$> H|@$> H|@$> H|@sQ|@a}D> .h|@@kӓ{Y?KۻoX%> H|@$> H|@$> H|@$> H|@$> H|@$> H|@Пcpܔ]Oɉ9bw L|@$> H|@$> H|@$> H|@$> H|@$> H|@$>?((i \>yP3%$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@z.>OODwXO~SMyۓS]{oY<S8"vs"E6S-1J}-}&nrXZc %dA̞,rl+n 3K^w쉲TojJ{RĄ_P䝫Hw?|d 7gہm//ܻ\qi@E{ٕ/2 MF]}Xꬿ?t}xjHB4*u{_ (u07]^#:~is~}:? x-be~|˱Ծt|zl/*W5lbj枀,=#=G;7:bˮi,Dp׈8U^VyUr9jGjU[hYc4E"xXY>0PKkmYRN~rK6Ѫ! [de_nKP=.,1e̓T*5_% C[Z` ZZͮiZ0ص`dC^6GDW 8أ+)zt(uzte4GDW BWۧNWfzteu0]1t< ?<qO4ϒȨWF}tŀ:";NOBWϐrA{h6#]1䏅ZQ t|1+\t [OBWχj=&uŀwI_.~e{AfA8FQ<߯zٖM[Y&͏U_WuWWNQBsWRSHW3E s֝+;3whEeOWIWYZ : 22d EalҵwZ>cirIik;c+9;Dzrh߳1ᘤst`;%CiL[\;|`Đ.FEG*$LΜ.r֟(c_v_[xNr*xϲ>b'5$?͖JkEq&g4ؿ^E|6;,>ť7G+C|s4bVsUGé'ՙˣuhB50l)Bw98{V3)_%%g9+>ZTMώJ_WGeה]Q4mhG 59*)b[p|newblb]VEӪ_U=^HnO{ К=Ҋvѕ؁DOWzCt%S+;CWV+5CtŔyM•+thm;]1Le{:@2:DWCtp% ]!ZNWRYS=03p9 ]!ZENWRVRݱ*+thm+DdOWHWJ[. `Alg 2~Q޺:D҆!nw2 Z#NW!ҕ f!Bvuh-i;]# z:PB[d")RsQݺ~^?ڿ$Oc1b#w>/@ppgU&Ӫw!.1 $C}ypFsҥF,4phuaDi8/gxzyLU|+rt\gkx'vC)Z A@Wv=T Ne+jQ+th %#+9H N:CWWԺ+n5!CtU+.V>y1|xw |UiŅWo_ϮVl19}m*J<訙a,̹R A8(T 1J@?fߢ~4 5ՒWem)ݺ?$kG?/U}7Jx;7sXc1 H2W1lr֨a8 +u^sM[{rQd5F?+ďN)F]iIx)Uo|J߇Օ(VHy!24j) !7c/MM׫q:a y~J~IxOȧ.oTƣO)n+RU \? 
xA8PN|1^[|(?Ֆ?Oi}ok }k-v&a\ OU].o5[l2<9!B28˥#.4scNh 14Mbن02e1]Lyv!IC1"aFE~-chejr($ªŷt;eP ̈l Kܷ)1X.NGǩ%̍J<9]!ܒHqP*ɓkT9W`' rxcļ0KМIP$̫舧S)R @ANPXJxmu瘈:[Zu_2vlJOqj+`Ђm ~si]ͧruj1U2uLUR̲SP]p|qUhVv/7GSMlY3/-~ p}Vbf0 #L8 >Aǫ<8t ֽqUk!t6;/u)GU '&KJJƾzm ңc=8 / C|hi6RpC,#R=dt(?Jinw]u:~t>C%*37v"B6jD*cL83ƹt["ɥ:悪\`P@=Ƙ35i ϛs5wZd 03n'Y y>CNVB{v΍Ooj]+SA7}d(ek+TM>(OOy40pHD:3yh;zSi&m_?ToեUVt.Dm_*қB1%U&_L?Vz|;&VIo0whr2N'˿եl)3 aoDEQAr>E<&E'@`6($' :4АZPpz5.sImzӢLLo.;/C+YxW)&QX{a8VQ.*]ǘx Z|G^*֋鲺?f"o<S+W@}0kz?&"]voqb[ - ;l~~Aݱ%u cζԼBGnW-Y3ܷmpTIn85cSށ]w<ϡM+Y/bG6- ,~5iSL@Ch*le[]-4 辔Y:o,ad(S 6).]zH-g=MCeKWp>],fzY!by0hzK-K-뼟wTewY#=BLLYU}E+}ªeic gj:Ss@)|y%pىZ^KoS7&` U*\(1)pϸ42q^,B}yB -5q2pD@wބIBg!3B&MxZqɆ@ϭV]V#:YJQ8"c\g(eI#,zaS@p8R( GszHvR悷yHF%R6!Vd'Y#V}H_ryuUW8厃ɡZ<J/(9 QV@A)"8eGNb% E 4x#w($*5҉" j1rv j<6ieqqq}(Gg&c*_[}}oo"OΚc]D k&VQ3J+-fJWAD9E + A5L۔ UT3JRɹ|ъLfQ@}B2:%nU Jf)&TR5c1rvkrX.,ԅ|tuo/W.32B>li;׵l^AeGɇѰ}&0 6])\2$HRḥAZ'b5b8*I)֔-0Y سCx M&IvD12WtRn2+&hbq ZY{ ȨDC:mLH@2&19A(It_zh!#2h"B%x 4%B9Qg٭~q;+Cшֈ׈F)]$O $P9%xcd>GKJhE YtZՈFSBhGQT!;qyͣmޣ%&r95⇷_CX/rSW/Ba^Q21S&.=wrO:r^ G8V!D4:}Ń~R;Ya}v$\lH6{No8aME;~|G.@~l.Ά5b&xm@:* ؊S+JKiB5*x;z!2ޑ\#=xGz*婰1$1I" ^:m 8a%IYI|9WT:Y0|o@7hdָ-_[i$Kt)rnSڴ@mg>~m.:Q/^oK&}Dz]Yz]0m:UG«qmŻA9zq?K)w6邯D]qa} Tl>Ra~`}?[W)s͢4B~3 C?uẙ{UQ4kĖВ f:祎sFk;0sevJq09ƥ*E `@JS h2#S3\ 3sxC >ެI8;8f"֞m7aɄJQ܄iQAզe|)#C wJU\%$nSf@}{ѷGO>=i::{̝LKAȍ *@1!P%COLGʺ̉t B)jc$0p8smq)@C|OS0 ̋sKaMs)V2fJhM|+P!֍[JxfbH4 BP,9|ҌQΘ0BX6(K87Z9'[[Bh )oQT DibM^x\ҥsh#Ҽ᪾^XIXFy :*l9ßB"E Lj1IeMugCŗ{ًv΅И})2^M??yeח b'̩;Lt]Neן~xb.ܝR$ш32IRGQ6rlk,M3NZٟ~и]ݾùh@*|IBMM WeKn4anu2q7h;??9[ xQl3V4h˩3$T{ly2qp=.I͆`꩐f\dHmMgJT)V"8w;-k`Qx!J'LM6 %$ <ǛK!@pN$& #h[H=پps{Kq׀b~B{ˀ|I-rB6w.Pf1|j4&C3 sYO_LƑɹEˋE,N ^!;'A|Rkf}+rz>ZATlGm`,O7,Y7!O5!ۿ؜jnyك嫘d>vBEFL YߞN78o c!?ˑ]+sw^)&WjՒhx~09Bxڰ6(6ds7Oq|o#RܲrUnJZ36 =EۻνȽ!);:Vغsm{\qe^m_ͦClYaEwmy[r;?L'--nEmxVgs穚>s/{e[.Ys1E8Hg(lM0_'=v[frvZc/qӹS&-H:pk$Vzp*ZD|d)Pˈ9܊q^*?j|Wh@7ej `sAqUn\x %"Z(vU|gxDZ=s9m۰"zÓeL>-Ĕi5o;T%؀0=? 
- :kYJPe⸏D5pG }#P2b #7%3c"؀10q_Lv(f׌]3;-~|5t}_'sv|Eߪ;] (b|u9o8|s?Nst)#0L&fЁB`iѪ8P}>uGT!2(h]3%ٺnt}TPőj#W%ol$F:jC.rjY71/qx,70* z3XӼm]kE[Ul1RP$~hx][fnv92O?_Κ/;½q$:GbH]@0Rt,QhCOOz͘lQk8!>.d$,}:e4j1j.t*'l8n-PW?柯_:}S <}8Ew`̃I!eywo/z;57bXXC؛ϗ_úͥH>ٸFcF1 {my] IUX,|"ew1g[kd)weR? &@K=ҍ < ,Y/GhN6 30y/`NpLT*4 Dx < Ro#}mgBT|:DÁgWI$VʝE$d$NIʉBK CLSLˋCI$^'-=r(dHY~3Va;#$UPR+9>H]o9W{N( ٛ}d5H>Iv߯jɲcٲԲ$$V5z( &) ǘˆl Frۦsm0+zHƈhŲ]%VфCUS692s b3`BIAEYKZ)bY+$o',2uۚΛe?? ,y.(Y lf|h1>H2ٹÃbBzY)crJa" &!QdelS2V$/[Yk⬩g絓y YtP6-=-E$OVqASFd*"QA&$ip@;hSoz g -ģN* "1!*$E+uBsUGZZ8tvRg'I ^P+LhuqvY,?+7ǎYÁ廬us\͊[ %$q\D2,`’eOKWSog1v,/"`xam94eNϘ I1G/cDbM!NK:hKJ@Й%``GҔb-D@1#tQ3`ƭb؀f)˭0 )2+g'm*rC7ZHiJ)7֫a<hՌ%ndA&-XƗ{ 'X:8 tofZ$ ܁{r}xNP iʍG_QK@ce. |W!T1ktG^dy@&dLȲ)2Y_ā@ȸ[dvF/jB:(9- d1r)1nqbF UX ґM RD ܣhY[a%^ gTv,qJެA^2JM1Y`V]_oVyiֳ[b{Џ,fӢDh^Ha0TmXFy@+aUAT,ӿx[Y[ D=Ȁ?qw־IwXYY.m(2ft4hIMq#u/DrTlNR*YV*G.22JZ5rҲ5|tsOGj26il2R\Tp*Vgudkk%j b&zm8 ?.>Vo; ߞ.F"K<*UL6[^^OeVf a'Pr;`a@|uN&eJHl.ESBk̨H(\Ǥ) XKa[Jl0 7Gz5!&k4};hibWHDG -2V) 2$*nX)Q`H5>b.VG!GV퀧籃!sC+M>dD0PтM2osH@v pȤdN^:-AYg/:.xg8~e!6-jsLJF\bҊN[kz+&X6C!C (RԶe 8(-6@(p3KЁj28s#FpWCɌ&\ɐ31Hm9.Zw0oݺt5_52 = VZX7eQ#k]IuygR5/WQѲeL\\t5UqhzA(Ӗ8PSE,ZAxRAiןzPͫ]٘=rX0"bGA.2XG{?{~H#YË_'= |5umWr/؛f?I\?{d.Hs羄p_Jg]NR?C&8ny]4]a:qA/Hrs~nI8T,' nɪ@\&Ȗ7 z܎ NMd-88=hǣ9WvaMbi[=GxW[nm>~5BG|y!)!yV;?ևn3H*g+ZbB^ve ̲s&|d܌ߌU$^7FHML`4[Q.L}gPkۃ6d9&Z@RU4DL)8 +sne(pxMWj#l?*Mm(}3tk=?_K>F|<OF;K/mڅ]sV $فt* H6U><W 14:N8= -:.22i*G+Kp5rvKF t2瞤kVR;\=6Ekcn+a,Key9OA/|tƬ1μQ5$M MJ"5Ҁk2,  >ߧA)RAy/Ztҥ!H34( OĹ L DpZ¥PZe"_.B>ǥDL"'3aE Еh5rhPnUه'y ogfGJ AHV0IeTxvb BqK8aJU&Jf0$ET ]["HyDHHLyR l,-x6m],i9qáUǞR6O?aͪdqve}"N'^64L;R6[vOKѪZyGkrV $2 2[`i\vL&mJshSgLUͨ)^JeTRBݤlLk9WgRR5c5rvkzX.ԅttޏ,ί 2I.!agfq5< Ϳq-PɮT!{2d'| %$!20#EΥwu;Y%MC1|& (jSh Z&.L.$b3V;Li<];Dk߷ k2T#xΥ$;8ǜU*^a"wB-`6>$ !dEG !,RVh|p5rv֨_,P4b5F5"4q|ddH9{̊]r\t^iK H1%M!95sgb/%ՋPY/ 8jaw~,uC:JI}XdJB\kx^܇^0cNl(IpCe?r-U~l..fJM}h;ɍlB2Q+=Ԩ8|Gvpa;fnLc0kbQ e@AK.\ⶄ yH\EE3 T.$"ʬka"еȹKV/Wm}nۏͥxg۪~m I x*b@x{;\e,{FtHa&Xd.R`~`}?[)[Eiс~pVjՈ-պءU*puK=ւGy7ou;589s#h-Rv@E,XIsL6͚䑳sm;. 
yhL nuZ&Ëc@7i۠n .{“''-OD<י>+,{% ۫)~\~?gnXB/RpNrRK)mKK"gsm[0䤃OA Y郲=G҃v & ]zmXtI^M0m :}~icOW~nc,jyuԅ>;x)zO?0Ӷ(+][WMVqzzt:yKzy:Z|_zaAEK 0=tP 20 ch,S 6d/|Ƚۣ[B0{({${`+tNCNVt&B2F9YH&K0Vhgl% >Cʂ2{D&N`]IвކO3C2H7TU#Ύ6=@W\ %ͅfWvoqywfX#nPnaYwʭ2*Ea4 xq)UƄeC<:E)K ^DǔΓ2u,)dmTѤm?\iW}AXIXhAqg W[x)HjiЌ;%fmނ[3CҪ=.^AgK\D-씏'79J嚳J(dQY rk ɠ0!mep+=|Z,+e:$wEpI&$d,qsֈyCI r7ЄlÛM=wT1=}z,כ[b֓DPf̸h(M FV]J}F"t| 0|NdJ3o fշ=ƫ^Crrzpk{v@Z  iZ㍟6o f%,rH |9CSi[_BJW &pdĐ1DgS£)m.!q1rduduKLr#1%i0S1Llj#@hgF&p%1EK#TV#gwt㷛1]huuv"ri~'7_}r3x{\nJ*1"22K*LIf앎"!F[Exx֕cATv7P-fo,!nǿwYS; Rl0,# 1RNot?й$ڃ9t/Zw:r&1@KP p9(Bs]|5O2ntkZ X% U -G&[I&*?@·doNwL$v ;alkW:9zq݂LBm:=<{=ϦqAóst,qٜ>)V}|lf0h%ZȬxS2 3xUbCAu:G?G -eQ(Ţ(f3s`֣ivG%X.wpYY3ZDEt]d$%K*Z"}1V/ͱi5]!BWW. Y.8c]@ۃۗ Pݾ&e=Z|f|~[x%{7TWqȭJhJfeWm|Ac.[oՆnMwi:dzn_^Hw6t;݇6|sD0ڝa:loyu{;߅x.1 G ^+\ֻ$tcsJp%k40p 9,O"8=Jr$Ke݄Hl]$qiP-c"K=fq(~2X%dZ0ќŤp\^6j|ZmqϲN`MjoA1IJ њ *|k_OFoݟc,-8?~ˏWߚH.f Hqy藿q| 8A8%?|/.SL~ SϬ=Gܒ!P2 8dfGgF E<^G9)#t_i$qU.p9XLTڮMh.iϹ~L|44CZ+0z[%.Zw⣦X;Z rvt=r9Y)}X70Hrtwۋkۺ]y^|u`[s_y7sd4z=o=#[M%ʶd'I$6?1z,/Hh'~*Ez23/.^ΰާK=]uքdV`Ei,C_p4ɣ6M:o|2jˠx`(^>'vRG?<:ǏHxH~/xH#cA, ȃT5Y2ޑ1̗]KmI ,*K֐i ZG!8i<]]_vmkskUoQov^F}ݛ. ;[dՐsSn=mdV,Z+#9rD9`JجDGuLᐢU>Υ!EA&fLEFEңrmZt*;8T?ovJIʄ>YF$@ NɁ m9\.9#Ϟ֗q^`m/p{3RP,2Cx$  +04#S sfQXe,5˭1<1(Y룶*uFh h1K3zýbBvXv)42N9q e۴a" &ȤAdelS2Vk$sw/YkpvEa{&*JCĨPdĬ1c sAKf4I' i^_<}d( ЛS%uRH rHZ4Yh" ZZ|?'"J,3q*Z6#-d3BRZIQ wR8( qteYksⅼ,k xeZMOgl,QbAf@єRO)^0$iڶ#"Yz4_kSƷhnFvP*P!bAzά.,K!UFΝ35xSvdBygk1.ԉm͐Zo:)AKUzLsu':OYuz ((i'!H&Ĩ**e}DRUnq* SJV["CqIC~A^y":)Ti( -i>'c,tym=|"@k׏4KKA Jz;aQwue#?֍WX hчsu=W'-^/JA-Ηw7F4WPKӻq)i^!)ySZ[_@:ZS\y2̇ugf`x(dr(QXNbqڔs:i21I1G'ctsAE\; ٢CGz{`υ iZ.Ig;2 lQQXN"Ȅ)}{LVN:2B@D id0cL;zfY Qk6)Y!eɭe J:sZ8R5=cÖQ#'oBm}Fs`. 
8Nseұ\Ĵ( eJ8U!؊vj;Zks7] ڢ8NOR %e5)c%!eSP2oic7ER_GsSZ&uߠ皚 6&r]U;&G/?=MzwT;jD\3DAsܷѧXAohպ/4]g8d6;=Ihi_UXqyd~h%C:*tezs`X7tEp-}VȮUA)@W'HW$3GtE9U{ ZaNWtJbjNU 7tUõ}Vu,(N>#* ]p{CW-UrS+V]V \T}: JBHWZ'u*]`j/p ]yuUPoOlQW.ѕ)IxtUP 1 ҕU*#돺*p5 *h:]0ĮNgg0-3o:3Rmo7y}5>o}4w_{]S3 g9ϣ$541OӪI]~Q+U PX}13Xj~-c Whv|(a7|YRmfɃOs_D[YuJ͸o}32tUӭ|[o"-ʪF_^-K(msI/[S|ͻW*'*qW,74Fae=T/IT Y\*X+J(n+A~A GVz 3Bs%1snJ>bdwYEo A1MB-* p}QDݣ۟-]. 5H(\Z;=cZycGB{,6Rm{ЕC5!{DWCtE(%3] ] AU+ ]0J):AZ'uE,ptc%tut+=E ]Z;OWeגzR&7tUިt>vUPv@WBWh0J*p7tUΫ tute =+DW&] ]Y+퓺"J[tUj*hmdC ;ܾLgI=ZiAmuZ?dK=wًs_=O*z ]whS~oyY{{W]|K滦_MI㫏ILv?~\{K϶9p'6z4()HSFkt!OcȦs q$52Mݒ:j.̥^xr}~u\6yRFӲi2fϴTw>nr}ݻ~~16L~L?OZmi8㶴i Uh{ώhe{m3mMYǷ\n^n}?&_5׳SipĽ緈eYC 0 JFFTi|gu9l'Vl,vr|e ^=k Ik9 |01x>f_<71 ^;Baڽ70KW74ut&$~<Wt z8?uNum,kb;(#)-cACi5V9XtRg]/Ct-RX*d Uw٠訹{ARIR@o,(pQ%jfLRP*I&[ͼ-…Vtnx:{seQF!NWVʦ3"wuc|mߏOkf-T-¶G>\|b\cay#>῅ٸ0:={A BRZA[:z(";  ^FA@ʒ[2 tp 2q:6zGsG-y*ZjMx+/q}'qv.>g{ey%Gli ۮoΈhzB~"]_}\\]/J-h&\k"&D2 @|6Vjz:!' KMOgb*Z|}YB|gjN&Xuυ7W_;)pXOQ+@X0ѳ3ƘҦ}r^`ŅC+r|ʠCORXT1XXYeLYNCy%@JPq(}pCdIyS`xLKҤI:r98%RѮ5)2-ǔHrcp#H\d#eeҊҡx[ѡ;M݁ב>?-ǓG!{U'M2yF#B^Rqf0RV 53(j5Y@ƐTx[$IxHHLW"Z6"$-/.gծ_|kl饶 ~%\)k% x1g#!TM5t(^셌 IjSFe>¾8_P11ޔDu5v3rq:w\S͸DjAkvǓ&\c:H5$u (FYRxtS}X!Eb=$J ABh)"( \tN5zO9ީl*SшP(kD1hA#B٪ fAWO>h#S͚ -es` $EgjD}cҊE+$Ꙙcr a%ؓf_S+Tekf<yM#ySu6C"6֋8A/@F(MQ&0at0?Ekɚ QWc1zzTa38TP6獶5?_uo#~_rdFz|< n~|Goh؛69cݤ" 1$GAw)aUMfX(F͐x|ǻQww'(N#߱kЁrIHŀ.FCK%I4JgV|pQYpMǬ _5nEzO)]5fꏇW䝆O 8uuزSHS:.:CR`q`c<>τ[RtpA4zp uyʊGUѴjĞpj+\]Ҏ+F"mvcgba T٣12u2R тDZTkr >ޮIpv>q:E#< #AzP|ׅnYS]IK؝CG )v4YBVe+sL/N^3'y4%%;Cza3@F+_Ȑ&YBM=I-2# IjZ"d,/5/5joEdϺqȼ9O6"SѵR2fhy5wձZ]~((6)KGD6GJ՝-\u>]H.Avٹͱ54ҿn~a9%a7ZxB,,Blm5Z2 ]4 JBCyFX_j0QhN `D,j촐LDB,3YSX}4Z% Bm¿*0=kY>RcpRw 50/<3)0lr5PQ =a^%ƒmiQ I8U A[vwd3I v \qDIB𲱜5#ro$fBW}ʗO%b)wB}^ rZ kL%%91t$Y %nDZA KAĠC bBz-[X9!X) j' ~c2Zk&gl63F4)j#FkV2pD/4T@wh8洸y{2%[ĩmf}~?L8VbČk[ Vif)@E%HLP9H[v3#]/d7cW~< ~d'L2 p6h:jrYFP@6e1G;؀ӌCUQ7jXS]tn]՗NHM)Ġ8BKYw>A]TS4H6hRt>8)ckhVNkZlɹ9A3Gmf-㓟}1I{46-Ḥ6 Q.s0ڃ֙Jd˸7+4mP!HJ0΃'AɸZ!b-w.˜T*d0D[nlEӏoN&pra ClɢbU{Ya.>;『 YV {)[?~[lh%Yv # 
Cg<=/8 ); 9@E2\ir$ d"*ҤYdC/+Q[%0`Vm?5#qp:qVO~ hn;u{'I6ʹ6_hcύifuZZ+OZf;YgY3ZLUЗMRBU9S<o]ġY=M4H3;H >sry&9`ԯ\纘?/eur,*vrJi: лJh /_X.C^?6)c/t=4;:M|j}OIǥv].Ө<ˎ)EOɔջ"%օ9W{=z?ekjn%pB׶Ij}j)_"oR|mFl zݰvu1nQh9} /j}ulY&OIDjocnnaG7 Ց`<p_lcU]_W?(M(ÚB|NVwQZidRq<ҞZE4S| ZǢL,I @l /<0cNJkRiZ@dtA.:rTBƜ^ȎeY%/#.fZv$;(zp>&M@HcA:dP$ R*yhH>+,k)EY9Rbq(jbқW^ buj(%y߉ͷ;C ,- .Ny?ud~oh`A~$q^17㫼~HP~Mr41yMRg±#P2;xl`x}FW[][ .\U"$mh;J!7uW.DE%_jLyx~SomH?m/׿=yqq~{`gW|q5?KOŷ\ᣙ3fmзobFa;Yf?¨lѽgZ-ڇmoW|sm!h7\_Ǔr5|\[oNpd4Nr[RY+T}_̭wRr睔~]ݷ1zm2,&^?tr~{Nj}ga]uYqɼi-#ƚߦThZFUm31.?MI{&5l] 8H-)9v^y$RM[lfuWWU# .`8 ?ǟ?~uo?PfNɏO)pOSWo׼>iRiXX^cFtnچ%6@)9d8;EA iA x/Qɾ#kUU^^rRXQabAuAZ\/ՏK1Y9;،V jfAO3&Is":hd",()ӑHϽ0'&! 5Jƨ8uP( f`F^aШ6,Fa JECl̙N!76&^H4mNv^Fe`7ُ(J4 pBmr Xu;&_ㆬxvK *Q hehSNV<ݰY o+a<&XOY{3DI`aLJ'C@ hTcy8Bf22Q 2eV1л3x 8mXk #5LnG.'TTr"$) NQyܑ`&0ŁHAj4k XA! *1}ka-kĽT{kOJ3J,϶o\oH6\ArRbdik+H A]1cRɫD0J ڕIiQ.P!켠&blTF'( Dsͼ E@ KCd 4@szb,Rda18}3p=.1h̤s&B@V )$yr=4<(0qz?wǛ=3Gw@ 1w3>*c6h`fbFᄳLepYجojk&}5Ykc&PԴf7;?dCR$.tE84,2f=Y-p^H%9 c\jUdR=oAiLpOP_鍨p<ֲ K?AɏW1a/S<.=~e_ }azS.+zKA'3(~L-1}Bjx~RQ% eS FSQG^a&*4dXfQ(B*t:|v~`nQE[]뿓Do<,~SW7+}Єk0Oadee5_\`a%8\UmԮ!].F4j9ž`O{Y"!u70_[`iq(Xjj*P"(&8>G'81ɳ5OF08o(\}Ւ߶CXƽq .7L'Ts\ CqŅ|-}{tex۲d?`?4M?ܢ"Uo]e:wC˔W2Lw OLA;1Z\5ɽQz׃,t(6>evW >lp~]10ĸGuMեP[(>0(7y'3 3}Esս~aN*+=hb'ߩvnʙo_xf㲇35f 3ΊZ)eƍ h;HʂBN$=M~ǥi,˳j蘬 vǵvD`J'(()ƒg!3B&MxkdCcnK{kp|Z*uccc֫MMķ~ura빪~FO=m>gtX^ (Zg+%D(;#+ϩ[`mH4l・w--Y%mV-#孰4=Ld& (tЈ"NICS0$GH WE=7s绾XuگψvNGPUdL]fcS QTP*ɽ{s3.۾Npottܮ`\O_ou=Z|G(5jDdR8K<+I6/lޡ>V+*Vj6/z& S$YI d Tav@#W$7I5WI8I ʃH:Knԛ,ܦVފק7Gk^߲oL\+\+Qhn/GDj}jl̓fRO*ϑTU o_ab+ǫ M1Kߦ˓+v1_^Mkih-yɧ|)K2[*_K8Ϫu-":Gq,Rawo_oA^`8 Y bFя?/ T# A_,(2?Пi[xE=*y<բ|K=q)'R,_Fyʼ. 
..Sm?T-`=^<٦W֫| 0)t2,NHu  yEK,_)4`HJ(!Db4LI?LwrF-i,RpK?Qc29nyIW ҏ&F2'ae""[zmU}d4|2{^x)HTeICVKZ S6sK*DIFra\﭅  ߭ n-lVxCFjko-[ t= ޷fp!]+DKy Q2J!bb;CWu-Wm+D)IOWHWNt KB3 5t(J0 ]q++iW ѪUu"]"]I&i+B[툲+ V;V3tp ]!ZKNWJҕ.m`&Dg *Bƴ%'+%]+lcwpŞmvO2xteA.]U'SPsL鱰G*I8\qڨZ.1 `۝# M#Zz{3D94}d#"%{!+fh͞j3]A.9݀hOWv=(!t.]+D+[OWR{v6ݡ+kyW jbNW!Fq!BwfhE+D)eOWHWBV+Mµٻd+)9#]+Mg Bt(9Q=] ]))2 ]\C;wh9m;]!J{:@0> ]!`՝+;CWֲ߻:D2 :DWX "='l=]!JMz:@\-Kª-cbJWyUYoPoAWR0&f1;0y k@-p_ߪ]<\&CՕ:XFy9I6err {m֝YY:kGqt鐏¥9gF\}kQl-Gv=;"J!fj=tmy3+᭠+]z aCt5 ]!\%=]"]C];tpeithn=]8gĈ]}ʫabbVu`8Ͷ,Ug)2_?׃;>tڛŀ*#?f7'ۥVIg]:k .e%m9_ 6qrz9XYS*%$]>j(( ]*w/7xrc^E޻ք^>⎩x(\ tAŬ6/_7@KUSIm/[Iik-HM33s%Kc{c^N?b9/ӗqZ*m[ӛM_5`c>iNrlIs$c 45Y:ˑQQ6݅DP2;_۞_/kH: ?u E{wEi}ga`6>tOT]B٫5`g-%e Z1Uut=C>2ShF՘=!1*UeV[rT/VUuk7 &FOMj*zԬ*"n 5J66Ԕ1bM" =,3k mq1kZ,\T rz7{G2Y?=WgS[2U61k(WmamM";Nr*0!?rih*kR=^bØ- ӏF!(>:d!ؿ#QWUgt>u* b Fڋ#Y1'Bk'97'!XUj5ԺsDAu@r=RNZQ ޾cQڷbK$]DȝDNE[RHX((E?!Zc囄 E)>9* X %$ZjC_22$1ڴCԨ*Oz"GfֺXL)&>*!kՕkʧMpS7BCJ}Y\` 0kdIQ(!Q!S"kCu:v4&G)/VL ]d0`g,a{PQAQxVWɸ:3mZq9*gM%ʉs*j]vuo@2Aȳ±&6VW&b!%l0BX Ͳzֽ"^6-FW 8: VY4o:TZ{jPѺ J(Qkj o2 8K)'[FtTb0VjXq21ِMh(pu M8'Xh\)s (MQn2W%Ce:|=Dzg d,̼޴X =2UFmTzCZ2ȸAMACX' @44'X!AYP6{iLsw̶hU(H(]g9kƒ5 q.0 kG !.A 7)ؗGC'Ā::%@ i/ `NBѬ A{s m*™RP(q ,seԸIj g,+Rl mjA!O9!zZHR 5n5=() E>R]{F3XyxWCB"99oʤDHPF 6A"r C$}W(:мGw]ZIB}fpGҠ\=kQKUU;f#cEͫ2!pB6}fC|`u9zlkӧķ7*ՂUew mCka%a ƛAy@y*}r@ֱ+=$]I UF2rx蘊'c35O -tF^8hZ R|" LԴʨ`}CfT8 =h^= KH`GN1֔xl }CEWgUS U_"rΪڃ d-їHܩi:B?&_n3|mź d}ȅBOe,3VbM=A%dF$rc !/fs7DRD :&8Blk ) `*2"=Y%2v6%)5cEZA "+P(vm!Y5M(3{t$-٢Ac:ZA"cS:fWci63+I@Ĥٕ ?AjDлqY]õf5BeV]۷%VBkB`iY MGFYICxShm@\+w[U #o$XHȨ * @t_L04WSQ&z̃qwoq @} 2tsn9׍$a n=B!l$G`3)-zCqTeQۆǵ"^D(Y=yh4D&fcdgrrlYeFIJawâDy آsE5G?Tڍڪ{(-hof]r%5n *TA3Z`hHȺČ*mAz|BA3rw]߼Uo4v+P!>^mw`E]("6J1N>iH&9%M bZ0z)+ZLji87F3 w0t\:'X~ xCڨKecT-x4NSri6@ZXV=H՞K 5)1fj)ƅ'еtd!yB]ǖ?jl5^ AT{HE D* g]`*a2¦f@ 2q]HfhJ7a#d.iS#j4NrCɵ!Տ@n5U Pq!X7kΦr1y] \%TǬn.w?ڎWڝmO㳓gG775Ó0c|iOc܏|ڟa#9#BW6Sat׌N 0 Zh иur(=}@Fo-> H|@$> H|@$> H|@$> H|@$> H|@$> H|@[ԧ-0f9> }h:xt=d#znH|@$> H|@$> H|@$> H|@$> H|@$> 
H|@/\ZA|@G{W^|@$> H|@$> H|@$> H|@$> H|@$> H|@$>cE}P' c}@@it>$> H|@$> H|@$> H|@$> H|@$> H|@$> 1?z뿮^=GMǨVWWesy{(/Ƕ4Gࢫ]mieu趥z-mgS exI.KV^ ] "Z@ҕU~:|`bj5q)t5:>t(텮hAt5/H] n1t5І,ہ2F#q/ՀR h:D]#]bԂ*#QW@[k@E]#]q,بՀZ>Ҋ:J2̝wXoƺEש+ΙϘ\t{}DkNY?.6_&9vM9_;3S@>g6_9nZQ}w~` ND,6Oi#4ܩ'-#)CW.ť@ա@՞jvˡRj (BWGHWF:O ΛEWnXR{#+6D.,XrjՋ@Hpt>b pZL38pt5P:BrE]{AZ ] Q/6o&tY p[EtZC 1U`MqI [mCWZ ]m (]%]Ec=3!O2J'iٮke_`Z:v9k7C 9"ap ,#ؤ`2n¢"|87i; ޿45xO_WGPA ˽՚W8hA~:&m\ӿ!;h"wWHc*Sơƕ̱\ߍ0C;|a[*m'3#/͇}>TOv lЍk= Z='T\녺ԲOxk:F:Hi=I𬓌2uk`G9ņC ocXxIqǯHTHl(czfl $ TR$2$9>QJ*} fi '}Eh#fdեe=_C¬QUg2s8mph@}_g;Tx8nb;T|[i%Zdp6d}3.ggQ s4|R~*.kt@'# !d)/GXUroPN^5wt,j1w#9Å*=pݫFoì5?Ew3^-oR (gW+XD95+3&%RPuE/\YΞx>H3 tC{ bnPWx>Y}O'dvlrb56a,gA]S ~D2i>aT AϮ4#48 a`g.K͇÷jf'UHMSG\],Mr\XW BdqϨ Q; N)o 5(,M \jN6G "q20eUVnleG1v0߽[C@+lWW[-R`: 5:zq*ѡLYUrYW*s$$&B.)2>>Rf2ZC>(tۯqK2uϙzo97~ioՄ,Ji0Gխ i,i[AT|e}K*Y\yT:q 0RPQqHAyPN Rr=@mbAƩL@h5hk1 4gX@eESg}UԦsp} nLZ>0%Aծۚ{ߘR/禆.eI (1 y|EriHVzQ{Ke)IkL.dX'al˵w*m B.wȨ%tZڸh!JP<1C*& rM _)F6N&,v@9<h F. `$ 7gsD%o79mέL;ڷ#³E}5<;:N˯xenHGA*D\aiRʗ`Xo'l&j_m)>QOF?\9X>xopyXDZ8:f7& Lwjk?iywwN"(,ͯ%\PEQ K1W6Z7!Y򨨨?&)$icZ׭(xRǝE|8~Q;ɼU]9!edѢ:{ oZ tQ A.P҅9l)OJ/d9ʙd$@2Oe$d|Dw[@IG|{9"Ҿ 5~VoAY.-3ɥϰs?QIʹ^}&gUf1˽UNo4-2\(+3hJ+-fJRt85 W&ƒTyŷ)1CS'AU(J!$r UE>֩B2N[,6!I5͌όiƸ18 nBqWg󬌷.NV5۳-K? 
pPξrf nЯ.YMRḥCĩ'b5j8夎ܚfeYX ٳLL*/<(jJmc"^eƣb;uZld QD: N14ahbys'90fJLB*'Qa*,T!/iՒQxD޴Q먬m+~9S-ؘ}ˈ0#BLj#.Hbۘ6t$oyN,8Ѓ ,@lBGI%!@ZUG%-GO}MfpO 3bcp3⧛U6u6f%"miNj/\?%SOY7ty)A 97'pSbkV5e!kq|x ۳b;x`oFQ@>#~ - i::܍Bє & ٵf!2X"F6jꩲNoVE!sabRځV" B*:_3\pŲ-^Y72[z+mEsPdRrȮ$&gDͩC\vJ.P]ţoj0iEQ$剴Ɨ, *1TRr%E~ttN&oCpXwT BH%[DW,JK]v.ds,Үp3 Bmz>¿koږn'+e|170Gֱ\VYd+>C[{Dlʠ֜iHbi0 D}S@-!`M$?}l%cm|co zWz  Z1I9hk&$}L,Ӆ1,*;BwVXPTvXRo4Ztv[-it4V >xtUcmB"F͓>Uߵxq܁9*U| 6V;UtۤTQ1iNAf\1z\8glwź_ucA=k]A>u2f= xWJi+@* erC‹ߗ//~O^ٛSqDǓ/ٔD/K4r7*'trRfb(l%+VZ;u}-y7Gv/FA ^F"OΦ+RO|n07L;:m>e}dhjO(bÈ4>EvF7zVz.iR1圔I` OBYT$?Xg ǾjT^Gi3A.~i?~i=߬t Rzvp 1DzڠAĄq0[96T"7jls4y<@לh2tzxV FXuuG "Xz.uMQli >L}V۶ =ڈ+b N3 JDH\k;EB5!<29}ax,;]ܻ' "|eE)7`~=tOpŷ2[^'u) 29#"!uun{+9y ;b ARyI+i\ ɨBUMKo,kq3drZ=YXzO1/bW #yS% *]E1M,D 盄 F dP?)P{趨P?6:tR !Ș]1&"YO `$F10J@ ֥"eXitÜnMv,˥'36:_Uey ]FDH`W&V`B),KhP6_09bM::1ԸbWXʺ295jUZ-IVlQ'ﬔ% X4pnq_+rSzGc' m?Mc JO :v/#i5 5? jZCY@'+"ۑm&cr0:,HuO(X/r=!u "@fXDT ZߺB2 I54Cz_ʫ?FѿVqrTcc{Q~ov;C ,5{nfoߌ8J"SAxe=(yC |;Rc>zwcGdH!9;QJf3:Gwfw7kB檒|! A1&ju\ƭ͟y~ۯDJj> ( }lS$J]ky{GMvVλinu|fv>_)~v⠶Cèn;-o5 Q:Wb~Eqt8.Gs^ƓOgm+f%E$׎s$G:1s6x':ՊOG=;sT׏|ɮQޕ_炐jp2R>LS}ٌF2jcWd_<ԡ框yGOxo珿7~wlq3pJ&uI. b迾c5kŤOggRpImFU;ո<Q^~)g;]<]D_³ƵaLk`[M1_=t[m݀H7>h7^ >+15.G7vH~{^B Wg? 
Sh=Y% s,X %, >Cʥim 8lyQL[(; o4/^]q=6'aO=lKxb`npdӝQqkX5M) /-#Pd)H"t$!]B *Z F"VoP+ ~'=r>&&Q4̦eLZidh-JePnP(,'fY7oc5<[h,6D¢fN`$++ Bh% &q}7#"L[Mz-l(|,ބot!JY:=Mevjķ4-琀vLHM' m5u]A]lH}j~wG%we&\(t 4`j#0K% F3^oME{0jD@̣6ECF{u[C42F #s3IsMԸZm =7.Zюz+_Ep?6jhtH'ճA{*}P+RyE{s.ِݵ6{Z$6N(&*aa':=/Gs &RHPLAȑ$Ymq%)2KD$՗2)gTD F^2V+H9)(eHг/97r*7_sjoM>[/g|/o>~hl(=iRd҂D*Uv(cE/AS<o/»p6Ie6m#+-7l< 1P쥠|;c)*/kdz<#+LI>XhdىA³ul i߹3I N:!$ %EAeA%Ւb)eM*CK.NzQe= ԲEe(1xqFNsËcX C(v׊64C ?D[vgB[fB1bw5?b?i|5Y z K !MĚ$43[n7# .-X.xa`B=9K3:{_X=O=Fi֌XjV"Y$mܧ|TA+jxoRCEƦd>ʴC)tnEXGLOlYSSp`LؘK]Ik mwfٚ|P=`j'n3Y(p2G;7&zz%c8%?fQꙅ $ `Ԭ՜?`ˆӜƦŕ^Lqŧgp :P7tv;yDSqMhXEzC's;.fMҧE>zŵ,B6zE.<rS"$m2fm353_ݚ~,n!zl-醟n_nO=O=Tn(!Ҿ:G[,*Rr& D'EpUR)AD7ix C _N˨u#)tUgxXRjEtaG 钼r~[,pV26Da p4Af¿Xx_f> U':T>Zz(ݔ1BJQy@S5@ @1![$[1(Ȣ@FZ:y"ڳm@r@"=ԇ(h݇O}|*c,YY}Z(ݐϢ*8Fګd?b~ۣhy@@rC.&LQ@ga= !@: <R=X趕&i#% " )"qPN'VĆX6u茜 mk:3C ප]ol+:{J%+!B AcmH9*^@ ((XzDwmgSto> 8\v]ѱ"w3Vnf^^V5Ogjvazށvp>>[(/#k$U)\~&vSˋI;Qh}0qׄ*t 2=xޘ?Q^ۂ|2>;m7鯚)4vH5>~/44#>q,Ԗ4](Rsq~vRJS͋8fߤe{KK,ݞy ܘbٱ!$J[4TgL> YPxjEZCPo߿zJ&hP\SPa`8le rNY +H)CCTŗl$z;\lKFXo0cCR㌢&Z^evLm6ySt5s:fk )`]j/.xR7Fs/xٺ}◩\{}?^mIh?iϫ~A5W1 oCtQqCI(J!2)%!MZ6oAQboNT3n0Z">Z=4 y1cx~OWyt6R}{-g@_[:&o/#MD*~t׀ w9A2Q> &hRdU6R`g#>-"9"%f̣&&T@J%~s5tZE^Z+䣐AxQdȆsIM2EjBA:.v+Nmv5n-Ӌ/HG\,Q?vHEĞ&݄sCCcuM$RcHTdwQ< @*KVB{-Z{q"O%`F/=vw m~\һ}mrɇ΃`*pBaaE#mK#S|M!fa 4Ÿ5ؾ)QD2M j>IEgM"{\X SvWȲ\),QU-QO 4g'a~eHaq#^I=Xńݟ# c.@3q9%!,kbsS RBJL2~Jm>(+5՛:91rC6?H.Zm p-6 Oe;y9姷~x'?,^- <%qNL1Ó4]z7k(@Ԗt ,;5 (@spFK)d '&Okfo5o1W LcvAn6G2Ayt1-kاf`5mip[%>n۫OeQ/_^]j^LFO+ߜ>`܉|NgGK3jx\Zߧ~/^fg|ZbN᨜ OVڮG˪Gǯ{G`ݛH}t55Z}*eMy4nS._G' =Ysx:krFc'YI 繌+fIˠCq it?n jI1fu:p4xg|S{H=zGoO? \ȇ:ۚoC[Q'ƫxW߲n)&F,sU) ~(}?Ě;iۆkMV+QWr|^ nA,t  ܋l^{6KU! 
BڋĭLŝUkbgcVT 'ņO)% PhH_:0W.M+=ŀH`my_;SJIK- 3XR[j#h {JϜub;w:Lٖ6Xg'`(]絃yn˥γ|ms:EYsm5aOy+B&XFFʝ\T}oY+E`:2:QF!:B;l אRRjٟ%r}FގعBC(\Ӛ'RT\I#t+g6Y[εz7m=L"cҾBvyمlp39!LPk#>"~]`'H>_evct[52Eqo2=*b`I 1h ʀFD*Z2G`-J˭(2ۙ6_rv<$d!i"Y_lHkAb(,u*g%j9 K˃,,LcĢ,-6gcY,Z:#gC9۽K]`FwZl\RNl@֬)AȦc&*$L& tc/]_ zdfD/WgtVH *d[9!PP bTՙ/Nz?ݾ Q+2)/c32:%h l8^\@PL^qt8vn3Jכ&ǴP8.EŤb,ȦXD@!QYPړ]WG?Vjdɮ_g,H%u&|$ X:SιQք@ 9FpUIu/xMLca-tt!ѱtML}dх+ȗk %7OHs(7NtyZZ-Kـk~QJsɵ!KC&2bJhY>MJʪ-7l?Sǣhy))ƣt-[#- (E2%_QJ{.>"+Rvj'4ttCqMAz :%0YsBѱ9۩ң0~[f2Q_s?ʇkmvfbEwۻ v7 v<$cȒA{83eGGbٞuus}C\i2s܄@>x\BO!iʮtFSPE8(sFY3q8PS2 6B4;@78$ "qR$\Ε/ ӹ(IxaF#NE\ eoH1RW}1D9c"$R$1"L -Q WDhf+/cccf=ړ`hk!z3@4kI:8uαv;T=pRh/Pkx^ƕpVNz_%K ޤp>ЎùcGSe#yO3ߌd~KWnes\|8k\,W5'<[\gـ6wYOו^νƭk͹k.n}x:[Kuin2P%Y3ab4'k$0FYi:H' q {tY.cv\qؖBo>BWM"I)Bq3/za\Ugt x>lVw~ g[c9 N^g73 ~|O&?ӳQ%!ZN D7KCo~:wE`T/(ד Xˢ?i<EM"J_rXw菫пY>uߝ'txuw ,9<ȚZmދ[:6vWNS7]-ߑ[6]|)(޿QB7rsZo6/OB\Ҟ9lS 75FFƢg!&lZ+0-iޮ?h&a2m"cE4>_ճzQnճWx;Wxp(xt2p7޻&8߽x'q7wVJ;υ|7)\%S9Ƭ&&u)Y]g؀g-z]xɌ]ĥGC׭=Aq>m9 ^Y4pd{bqar> N..O?-#8S.&ЦWڦ )X|JD+CGnBt=y,I+FAJ͌O4(RS k 0|Q6*B'EHkFf)9\2TWyiKuCW#ƣBe@c!G;'U&z,+Q{%MJ8*Bkb0!tl泆S4s=1KF -#yz>G? -q&En\Wd g\hYVFj]Qc$FiO(&C(w<6@^DB,PFxR@p83ĥ,ͥ`cT21CG#_}罥BD Dy %ĊYLĹ%h)T_\ԗw<+ yU50Z 2zf3T:Iȴ&R0%/QmS,D ϒO!P£ %2PXQ]QiԌ4o\uXbgϦעMIWbmY99k"+Z@ɐ[ϧH(E5Xm5\+&tP>/ Ќa׶%"(+ u.j)}.v@s)CRgȾII8;Jp TiXL햱V)f ͌ma|--{EyFk׵'y]jޚAFhXK-rUWJ %%C|HH* 8KA gdye"ٲ'Ðɞ pGeɕ%A3tBnGML0V%6#@Kb8,QCfǁX]S0V ȩB DyF6$bYq.qz{kL'i$zΩl$v9/"C*r*o* (A(<|H2@R+N8aCda<Xl~jiaH{[M3>ŠP\]L]P,X'5C:! 
<*j(-G%)GYvFD<d5T [bl'GX^gY]d"bo7֘kOſ* ; o:p>r*HGxP!QkmŚ'إ:u+$vA<5vf}ONҎv#wts\u\lMRڱ+Bai⚋O0ٻwAtI<뵞5UsStAb:z(j YE+y~ g+tWE]qa} TlRh~d}C~;W8]J[ -TjUZi[LWPa-6v>L;%8piJFx0LJ0DŽ FX\ ϵ`աߏ$G-נLif{΄9 HuD Q`t&;΄YZAޙ0K)Yיv&Ԃj=.2=;ƧM.?"CFUufSe~ !;ɷGMt[vIv`jd Lܨk IBSsh!)HyPQ m̑;|D:B31RHqp-maLeh _1q6tv\i6ٽB2%9..u\Ña͗X(^r/ :(&y#ib4hlIPK0Ɛ 5FY#y#%ƐT%DJ !蠵^t?vo|nig5#dJ *FH${T6DCohtd=> xV'4Es!9,4歞3޺ 2> gqR"aI\ ,\[ҿ촯mI( ;&I,H%$(LcEm?e3}un@-e'd؏yB9e7|?n-Yf6'?'oRTz]qh2{79qD{5c.0l|q/]qeԇI ;d־'g!UI%r4uH5+4e,ؚ4CNgLp9Z1vS!';I)[zKAd)QeȍÍX`] thujdV6i&l*;ɞG]#*7mwVGG1D1wmnǎlPiZG6#P8.;G b@ "wʫՉsn4Xibp:cH9rP^|Qr=gfA&~mv׋?wŀcu1n}Q)2Cg-=+Y7:DO" &v`DqiHVzɣRY4.֐L.dC B :`2m DB.ry1xD@ \p`\M4,U'>b"+53x} ( 督QcE̗g~l.g6&'~[4}}9~sd^Au1vw5%Ő娟_ !'p^?^˸$W8yKu?SRr¨:'<)Kb}3,>yS';>CZ .½rTG ^Q} |=2{1nGKoaxoWGg< \|mM~ۨo|,}\ J|zo|:N~ؿ>9og߾uŖ߫өTR07ܳkR%׽o>- e9s6fPd6S&)nN߼CޢXN/ZU{jr{em_yS֭d{А Ypx{L>N2H1d2 snO/hn_/~8l0ycw /C'|Eʝl|}]偓ᘰ䝟1 I}/ۏk.O/Xbk);Z^d!Jk:wԮ[t3 >讈)HrD=B%IJ8cbTɕb* G#޹LZŞ_T /x]_&bHpJGg"zXtGxRl rHFIC;+zY}:&qz5z$5Ɗl_ຓtu9n@y\2yA^zjsҷ^}PwI]ҿJg5=Y'v(h,%]XsYJ-yj]jԷX)3å8;afp)V41HO~*X NOlO$v|ES Ka PeN А-4FI T4o#E,ɾ.zu:\km2HV Tr"L$-#DTP9I2$P-gNJ9թ]-0c]I<r'v]6b'Gg~ezvT 0)!$N6/3 P =I\[A761jzfFfg5΄9 cc.gz|tm%?֔! 
LNJ[*M>%fIٔ'ѧ-/N[VT&-[R;luXܝvɜLItM9T $0/Cʡ<$=4 0$#HAh&6F >8NϝV;)0 :` ;؋p-3&RFMa{bqx>K%+χOT x01 EIH 9|h0 $EWt5FY#KBcHAicq"ӉM^x\Ҳ)&ζ~U M6M|LPf:,T|B B+B `D@j1IeM1R7Y1Y͛YEOv3\H) w"xƣt[AG26N T$,9Kt^ Mi o0Դl#Hj2!wf ָg:as*cnRpN2eTZ8ΓTgBh"*R*pʍPSQw!x" 4~d\[wk}<1l(FG$4>Xdx 8Fp&qN'4x!T 0F+Ho,6QҤ׾wv5ʻ*zAobׄkp~r}ۋ~ӫ7àsdnUqOWxnoͱkľgmc$6W+$_k^3UivV] V{#{zܞ R$֕H?a%ara+DZo"EQRyL/02re;뗪w\ۑr]ҕH"']"x0H*$?ûo ]zos='S!ChHlOJ FWCA76uB{%vFZO(DONcb"*4}o#T:穡D)zGS  е%Y{ju/ 3a՝ꚜ8Zbe K R1:H*iބ(^{$>b⩆b&rOgKZg0HG%aێD?hPƒ2>{4F-5 rQ bʊ c1~1DlN<,.0gھ]_g6a'~7ҥN{uzA^Rk+X%V|"Rtfr)7ORB\=DpMX,#1AK)Ht$i{DW&UVLús߈=>:<TKNPJ@K˪N8ТRrFlL5*qhBG43j@E\ V_;诪BE(|]ڣP+ܭӨHY:içm9H=LN{uѧUdž 2o[Y~uUho%P|@T!+f2gba!h1lD]ԍ@?2r֖s/B@Rbƚ(y lHúseUa@[Hf--|T[xޣl'o(Ӳp;Ӳn< 6ZLMy-qm؅(!UDXIѤ.g6ad/U4Vn6-”X}SngBlHڙȭH3Ň{XwpQ/QmäcK6j6 D9XXK=k9FbGl k!ZrCasSApzabEV1Q0(Zʨ6JH6WMQR->a֝ԟMq[,0xEf8[[!K+ >Isc[zI}SWmI1@TBÞ#Uv[3HXP2i6yӔ ús|E:]\NʵMaRP"vq&>CqO}C){\j2(+$4`rcmäaLyTZ~Zj,sE{O{iяDw{KD)iGhܔbL)W_&245<;>Ne/V;w-wwl}ذuHJH3K6DD b)\[,B[)d\DnѺb)|5%|9՝{T]m j|.3vy~ʷdx kvL&_|pz= *Y bf4x58uoUѺR_sP[i<&2`B q*dբp>jj>A(msb̦( (HJ|(zSJU=s-¯SVXnwô\c:EwUJErcD ^`u֫&kG^&kd]ߘɲl% VkAς;T|bLʳTDyiOwjVt㺸"]YL\8feDjN ^X jlM89XSg<$?ז޶ҶJtT)bN`|,ހٓ#5"9j8s X<#AEV*1"X CiZ)9:5Xs`-Va#1Hޜfr*?u6eu!;,=a~wd"ÿvkѕzp˞d7r>-OsW-. 
6JbҐ3F A45K-Ɛҍnp ?Z(tC()iVIKz޵D,ww ĻV>"zT׌^+DPEF) is:zRT7`t],44˵s1r?Z~owqna8<\bmt퀓A=M?I"5z;iKa&RJ%irFVYC[=rYLc1-U!:q:~!`staK.3Ηl|OO}76VA#~L}D^F ÷'6Wn\/.be]ЛhiMt\&_o6ߤ0Nju^ϹAyq~wE+ +Euu5@MN ƥggGRUUVzcLkZaI4WKCVQf[s(WX@yF@1F;"i=#mg40rS]ܤQ1C+Oc`tuEO":YnlTJ_$F5-:wf95DR/T*E'ǫ'0_@²bGq~^ E.}|_qB9O|{D]M((cZRG/e(g;PXP,Wl:,}/XGbhλ,CJ*"iU[]rG}[Qe7db53#u:8I2LJ%>@pKɐ)I;-t8֟>R&e$`7}"Fraep,gOFHFQgMYe%RK~W!w8$g3oݨsGTtcf%ic3ڡ[ k0@;}{kU@Ib)xPTw:5ǝ?@qQ7[3͹Tڕ[G()ܔ#ўD~-Q]2U>][@(k8Fl:;Pt0o,(V- s#q*&7&GcZNPT"Ҟe$5m:p{QUDčf9l0-yi?8&H&"SqxY?VdT#FN!2$qk.s$cʴ9l)$(?xS _93)a$ӘCN=$v>#+'H LqFM"g 0X}ڀSzBJJ剎[E)EحT*B*z ,^:ӪHU.6)n$wjުayQOU`f}=[ |i,BK7]wO,)(Jzb_X-]mE 6t6_B]| mf e\u|U_:ɋngwjH\x'kW-@qxtK*&jk7V{[{}r6we6ވΆք5}h$˦oZ{$v59P krL3emt.r\[m>Yn1eLU)/c{1fc{1fW,7lTxÌ kX %pp8WhQ\a.#WR?%抟U %X9FKIzڷQnXZ'ƒ[_pTRA' jdLSW=hpΒiQ8]j6X L}Ij@:pݚ@#U{e8$o?aRLDbBz aA8O)ckC+H ARG }LY)AsT{QjU{ʓya!  J/t2w9[3Og(&RIgՁ*]yV| "ٖMS*4[ 8:rvn,cqX$&G#uJTJ)⩳fVyR&ϱ Ѡ hR+ (ɵ:S VJtvVNɔ2XIB]1%AMpݩV#$e9r )c[.K^4J{(핍ҲƒF0 )f.H"/^rAes{{cŢϩKՁ*Y.U׸K좄D ÆkuQ#$T[ o0ŷ1Ԍ7ض`zV*UJF;N]}I11ŻLpDi}4gWͤ|&c .%o']0T 3c=s2xfܧi"CWy8z, 4Vq{psA@ [6CrO[x AabS{7.`L0$urUPlPR&=C+8K:fJP.BT0)l"F1$ҚJ9#kC6/.:" `PIdFD,g\xXTiF=sԣjI|u*7STk-JW_ 5GoUa FD1sX=U^G(hJ=H Ho>$0%QDlrqtb\#gS%XT<_?b>ܥL}(ղI[1Q%ضS O NMe> b5e|  ޣ2&LsG=iwhEmFXS+ٿ4U&d* N,}Ėא hꓼQȭE&F|5uE,T)0X%NuGn~y71iz7.AU}ʴ4,(=g&q ivX7J#v|tNTi$[ js"w!b]>'.+ԯRV/ m l~MiںkÉ(r;N徯A,Z֌UL7}Y[2w]`}"*b꨺ *@ޢFͫ}w~ be'7sH0y)8^ !'CA}TͫWQ5j^UUJA DU#P_c^dA# 2Ue.K}P+ jۇ1aL6ӧXc nӧ(awQ DҢ]bܳ˨u nq2ﱳ/"VF"U׮|b)u1GF|˻ٷ~rywq~TM >l񧳋hߍ֯#EE|‡{Pi0ee3@PL|uϊUeK)^pe霌j X8{dgw^ ;0CI$s6hB`^a͞Hz-gڄt"}_U&EĈ3 ;dFP4K@Rhh hs;ƙfV 41‰4(k 1YgtO]5Z5'tzY5,Mx+I^+Fi#پMkG cZv"VxBCVr_JB6daea&L1^9$-YӼYLK[~HTBU6-N.)"XÃᆪXT!P_eY1<3< )$M̝lPEFShC],mԨ⋿~9uF;U,ؚR%!O6j*^$ ɼF\ n$ :@nSגXI]KP+mGs((,x0T\3(P\z;5n 5qG[3@b4 a1r7klVC߰qCM'?Fu\ 4xYѺJg,ӌ;9YɥB* ΘM;FM^-Fo* d+ROc7y )ƹmSCb1l068z@J-C狵c秠sMDK*ŕ JsNufRj0|$ǘpx* 'Oy"us6!B`hU9gM+r(8u}k8I6J:G$m6oGqҚ&RYpv1fMJ("mz,yb[_CUwat-`4c-&NSٝmCNVCc+Lᆛ[ Q걪9Rnѵ^ Qh6#o '@M8NSi 2K8^gHze.x6gE _4wk9 m ƕCsDapN[ap8rՖ1G:pZi![M E!U2A 
}A%hg_2ؽ"fR6XJUB$"2И1$1@ ?H O`&r;S[X5:FpbcnNQ8hdllSb ɏ-*)N=pX(rOhed_σ|įwɫ$1x+bW(ӥS\2N'_KvZVC{3$Xp4vhTxy_e/C[?_q~ĶMfSg՛mm A1%ukXq&Yf5JSآu暊=Axۛjpֈy6Vu z$zp F@i{ zx7>$ xXE|^ M̼]_no7]}dxOoiT۩ҒD0շPzp)K}BcjHnhm-Ȩ ̼4zwcpÆt;g|6W>49j&Yp&oTF 5ҳZ&o;vP9+jq|,Ͻ̋<Ͼn^XsĪF&Tqm@Vh8 EXfBs%pyEm&=q/9#q+}Uu\{Z\{vY"g{>KŅ3&B*,X5^*Gûu\uGe@ bI,xVDRY.[9IM2el=`%цiUX `耜.ǮcthRX@AM>k]O=2~o $]B||F #pFKC&*פ67zflBckCe6ccph[]Cvh`26lԌ{J;crUl~].TGЍ-[-:uжp9̏Jg>aɎLN鯴qQQHwuF2)1erJ.]#/Z60KzS޻*U^IMehP7o~ s=~z;qICݣ#,Wu72آGfۈ*SIxx C\msm l|t*t|~B"!#HJ;M(Xlpm<Ӵڢ޶)Kb HFm+ .ʳ!0-dί^yܥW;/ɲģ[ArU.;P0E[^=4DZ'ZՃKs-͵Q 4RYHl 9pBv@b¡ڳrt+(}Ӭ#2 kԠA}ctfW m{m\Zo0Șk4/-h~RܱךT@[A ;|^}3 }$8 =8z(5FM[$U\SFxy&sʼRP6X*^Aez&|n7R"9y*{p? h$(XO(zX\-|::?5>̡g]Fk*tvIb˻D_Z|.Ӹ_:@X>5]k]Y1Ѿ._r}N?j+J)U)k$v!_SU߀ܺNioHpZ)7RΑz6UJ; 8ϼ2RgH *!Gg'69ɔoMT*򻇫{kK ; ѩ?Kܹ-'kE(M;\n"q~TFDZD{uoJBzn͇Ge׷7EP#E륅P J C&~{`Õb e)Ԉ؏%͞D1DrER=M@$ ۈk)^#O?\1G¨ݸt ޣIJ?_|J ^#g>!+ֳ9啶@gqdʼnQlI)Vs+&ϣobn~,}E̖c7UycaG֔DR㉤&}>]i$ =Hښb>|2ݤHF-YBX osrȎ3&)IfxcLK+3)P#sA}ca7+ބ,}Mǻnt` G]+ؿ5-a|G3%$7S<$Iovw8)rNpöᮾZm-AYHm %=ax%Fzoȗ\|3`1+IE?AHO7\?q!MW$"eWŹNgS(lw`>.b᪺V7Xqo֑\2ct70LY`,6 8#a672'|_q1zixnl]Vul}ˏ~7u~ίShJPoqu":uIh3geKόY2F eޕ5GrQ- `"2vBO  =$!i_ajM1aW_&D˪(,$VF Lby[M H|M =)!2 $Km\\z{pPJtFRr5~څ-ua|bJAX^QΙoCTRh!@T4/IU`v-uaH˩ᨡ.K")t4SOIq9P5yq=AP^ PY!i\gfH5:tl$vF#FY}-(kQ%j`Ncم-wo5{k e,jVMz5`:PIp:65:#4~>m i4xɠ\߅/H.8Aߑ]08:ȖJk?ܭd<޸yMĤ>2̲$19Ͻo37 Cv p Ϝ. 
',/UPc.;Xauuwy79TH{rVZY% %gRTqSoK[Hh]q @'4U['xCΙIvc霉槉{|#դ("OZݥnx` h&{S7O,76EF7n) 1eYS+YS޹}-eM5(2fNm\w.O` mtzRQ#֊'Jy̒CPZʍ459s``eWBJCpEʂ($7@@3-Lvبuq2珟9kw JᐋRjbBRsSV U8rUU%xڠ, 6^s)b'wSxz\g#P9SP%ƟS¯JPeUxižZZ(*BYaQtLh 3YZlǦ~,nZFf{I=z:GVtju ՜w^0gf/W-\3[_'2OU1S߱AS϶ۙd>E /iWimi0ή/85-Ն?TO5VZ7.I2U<=Qv]InĈN;h+znɄj6$䍋hLղO݄: 햊A褾vyjnɄj6$䍋2ep)[nwQp;Zcdm~AIyR[ɳ/O7.#+gy>|(i*5)[]+L?o8E:*PWF,ѠXbމ`F'Öo S$`Js6Yjթbzg KTp,i$kfni{tf+]MU, M4.LC?Us)\lKӂYQ=<%M;fpvfv5WQvZ$ [!H*+?|}Ȟi&%/iX 4GyLݳFAp5ϥr7pgP_,i\SHd;ޱr>Zsj?w,)?º:bod6y1agd7,SF3թc͔C@.{{Md!Д@'J;^+b#V[\jl{YǼd`>}G&D92Y"ݝ&-/~_~ISW7o ߨ;[dd%DG5)8')y9ڮ@ֵ?!G]xA߰N4md4*Ù?#3oWKKrK~-6XZ]=䩻΢8ϽF&j35EQ/;fx*V^^ݞuЌ Hc Th"wjqu|W3rՆp^N{ yBqc{h H_P-ps2?M5kb敗~ wl2[ @Kf^~S \*[ P<\,qPr ϙ:WUQqc.r QH[ Ԍ&䟁gly~7 Oբǩ/z7Y$ 4 X$T=}֓-Cx2ׄۯSw(ܕy K]}tUWAW_uu]K w6ס2YTih^:8/U)"1(;EI[)ƍJ#(ޫubvF nk*gF5f[\6 )Tmmd/ hWs}ZihJ?f/SXٛ_zAZa-v!]4=POBΡrѽ͗L64HFd; 4SdB<} @:n 5iΨ)ԧ" I¬)569 B+Oq#nZTz Ou8^iGwܘ?;OCtb/F\FU>voXǸ 4H1{r PzP3p4]lϝ;4@khv߆* 0l{C ͑=1Y2k蟱F )Iʋ%AzC# 9 IJrvm8*w֪t~UPƎ4Hn^%g T4U(*-PE+J? 0IbbAbrDR,+uGp- Z)Q('tKrPUJS¬AZPhyqv/W,E@ &ⲡL/sKZ2~9򷡄4L OKq4#5i8;/+sO~KyFb|<)fY=Lh?Mv1OY{3SZbb}8Zdp|w dmN0$>z[I<} l A$HcU/11P\b ++)nB(4 k+{IYkMNބ\aTs;X1?rhR1@+Fcz~􌬘߆qMzd湘4 `D>U)z]f4)TFex%7kCo F phbPHpbx%5 h iR^$Y4 2@sAQhb "ۺ ڐ7. 
2YOkڍU;vK FtRD x7TUҵvK&4V!!o\D)Л;?\uafṩZlwY[;w\3dADNꝌ$HnZ @P?[ Lhf"!3zrcfYč}"-Ej I’ZIԡYO^cɔcٲYc)mPoYn`j$xˏn `:&J`Q5̗`b9ټNdT-_wo0m--8k*R@gcs%DCK͡lBչz:,,U ̪flYxodku1c.Ֆ6JB϶S*lPW=v@l@ [*\ ,7̠Tn\VWȏxa&tE={la~.oՠA v5PÚzs轵qmesQog0j˛˰:29ւ_?>k(x5P8==v|6Cgq:Y`B\HV>WS6 43JXyg勳  IֺKկsY݈֋u.f=rT.A;Vn ix*XZ)!Clx_MG{|a?v"%\wfhgK1i]b#Njwk(m0C-6U-mP6:yj)YHZg{-0m@⍸ -yjp̒d5~\6[2f|uVqS!f>Oco.`mm*1l:I׳rİr:O/&Oݧ 6j!a._>]Ye".Y㈸AR9U0..HN*W;ݎͿs5w_\[ j95#i{4Y?x愈+nf eG(mB>gΊ44")De,(^J@xQ, 2p$smcY8kU^"hfz4IdTy> 4C)X(r 0:+a՟5Pa(,I>ES2&ZJh>V5M`WVɁBΚ&}*G86!UTH*Ү|$hcB>o;( 3HL {uQ:S>9X:6b ĖZZ]_&d}zI?Q,dEdby6O!-kAHmt4D9<ڌGJmGG[fhLep "9hdyuV [ϧ^Fϊ=+2zVzPI ie"rSP) IRxF!I15N]_@k|VLgPY1H `HU*#"S/޳QP:e&8|V/0N;ǩ\W.O4|d4(KsJgbM2Z 5[ElX8`}rDYUS [ u%9ڕ/ilPkcV ZZVa]A;-Ixۑ~#dJ2gHpD %~9]^}NPo{9u3!Q;>Gŝ#LrƐtFktΒ;fu6_vf#ӣp #?y{huzh&lOwt3yjc!nfAhu` z/>G{VPoӾ;s!X ]7]-7Ph2"M?Q0!7 KɅ4I>eJfosf6ԀaquOc$_~Z[e.ꉈT]b.gVq\%mcaಮKƝz2֢:L[w߫wňb {(3'Kkڙ,~x;Y%CU}'*V_&?voф1L[1-s!Q9&ʝDzC޹0c$% 2ħ;mi#V3Jt\#+d7^6=@gPld!Z3Jէ?sw 7l_EUVPs&UU7Fys75gws|1W*4D̜Fۥ征ZYݞʎ-7rG{EC[vQO h V'ŏz<˕61[ ٱ,γ%Z{+Do QEg=reL(cPQїOq-!$LY8{z(|(ɴߔO6ۅA).$Y<{䠺e f(W7_\+2cJڀ]u]I,wwۧSvwZ[oah2!v'@{h 8! 
q=IEz*I}Y]>Ӓ)d}kt0Ѐ>܍Zƒ1S&푵EYmGu5ME#>y+댷BZ[v_>(j{vUˆmU̿W ށV츍UI!_jK5S "RF!Nz 7+N 6XHo*Kkde8AEVAhgtbd>{L|&4$!R[mtihDb&Q"T  Ia`ڤ1 :%zDc%/[ЖyV ;S|sQS}9*5gaeh&Ve;DK ~fWKG0i]kl :8$J R:uwzWw)-1Y =`md"šuoj?@Fٗ-25Wi"#{flл.y9='@]#cmuSyhHZ>3ȝV>y b<)j#5 ya0Vg*&R4Z3nQrL+\YJpA)+Kz D2?[[ 'ѭ)[9kFgr&j l bxȊcA@6'+a{>|Č#D'_5ސ|@A}c}?_ Z`(;PKY>DTevFԝϞ#A53{?@RA]w:n׶6N/O{i/c2NXO5/cvQ7`lGKs[v:e`uq޺gXIvgڱ[Ԧc-wGNQN'^] #QgMђεŵ-g|/B3v]S{ J>&}ƴ Gq?5u}}pxoernP|X+Qڬgp/k)C#-^A|)N̦w46@Wk[;Y?e6GMs67~?'i.мolI[4ڋh u\Ot~28k.B{ 䨹C5xdZ~V5_y/p3(}Tp<4fUT| |s=a I2:@)4UMd)΂HB96T@dHRzNeT y1e%\0FH4u":Phe' B K܋,rG&{GJ!a*u%c ک8B&>)CRj)+dz -0hUr>T/[Akh8?ځAeܟn6!I?j-7eKS2@s_7?M  =m6l$3:TR-_R*IyY`J ABɬSDB3 s/jWFhLܜX<N7=%YÉh'֡ap P0#l䐽HUjYzl$*kj7x=} !o}g{ Rrٛ}Eq~u (lp=w4t?tru=)ٹ"lcy/K2Շ>&g(FoK_W)L| R YYAt  CTgؤ0;M,f>d1/˞dz$ |hNrR"1&l!`, fJ6&N]"L(db_=@⌊(zp& FR ;t4ٗPt.LN^ W@^ͻMf> LD# |^gS=Kw2M .1IXA#YF_Ml)ml0Wn҃.(`aG5'XX6c!~>4$Z!%2~(6bwq+7C ʼn3 (vZE%¼Qz Ez'FR4E`{hݑ*¹ݥ15RBbt}^3)sHu99cmJw_2r {[M@NzLpRpQlta&WaY9m'U666p[Y_'=Umx`voWlw[riHf"q5M,J쵀2<[h>Iːd={kvzƯ¹*m}SbŪG<8u#l1{ֽ17/g#[R)P޷{+\lZ5Nppjc;w#88)Y`lE954<}(O.@=N&,PXYrRb2\wn۸tp*I?Ib{L]L.-{ R~$ "S %#ϴ}TU5@ )c@mޜ^*KL)G2ԊX5uRJۻm޻ۤuto= !s^̚՛}C+$qI$&K+Ό6ɡv-Wйt ѐ|` ^B'tPmrz&?Ӌ@Jk )wt4j0Xi)`sG@i"ݽ.c3*(sˎnnKgsr9.5a(eLB: NSF]599/zqMl< Un*\~H=L c.GOy 7CJZ+rh*.JA+ eEL7Pz Pi$Lړx)>b s2V $*3C66wDt1N8!9ؔxCuW@>Ar5kq>[Lg@,ە~(elhIi~lTχ4Aʙ0PH%1lKGߖ|bCn'-.q_jbqu'OO;A8k{xs{dg[NrC7?Z㫏76~Kz`y&XW]n%֥WQhD7C bJT7noH/NjENxtAU ͞.Цq1kܤ-Lt|g Sˬr)ܽd&`XtpfgeaGkhݑ݌s;иmucY+) @KCBLTlόmlf.o7B̘x SiMCE"(=ĤcLh$*E)ꭐFЬ;rT.N$s9~ 4;6R4-GW;Mݏⴇw =F.ˍL%3L}(aLb58O6h Y-bMk =&p*Ǘ>fJ9{ƈOd l{K*+?q/p5,mF eY0HL%211<2UxQrhQEY&U& ¿ii$ɘ".LgP VjЃ3`d{Ա (%K-($(B-f_ƳBKƌ<nWuC_7%jH$*\le}p1ͬ19B$\9%OpxOBw\DoAG}ti Q*'Y,FC̵Y\0`Uc3uRbt003DN>}d> mY1?B]0*RȔ#^Z&d78^5J˜),IQh!/$Gi1F1lw$P{Em"Fw%AC ϴ;U-ݠ5&*4ÐVVLJGj9-&k3m C⥌~mՈ!(J OZPLlsњ`==ӳO9 1/h= i쪏5rT O!/sC#_ Ԇ.Sn7\FicCo0ՠ5j =jNЦ)%M&dF˙Ei)nͷRܠ5'›J}͈y,ذud"p j/ D#^m}YVYEdZԍC;-&؅AHZC&tZԍF`S7i0z.Wܗ"oG 'G!wht{YeӃ Ia(W,Nn3`\RyiLs`.88.ݦp"d ~m#aw+zʸx 
{Rpj2^Q`lNzSk3,o,/}<<6(BrgT*ҽo51>C,PTð_qsKy[W!b2F:DW@mko۸߯nWm~Vu3] u8y2!vՏD\ D]zX@#.޴AřDK Dt<\=<*) 8Y[CdaY hJicCno.6O۫|N6~>?{!K?j&J#!>[ O1DwÌ}FQLu횤ȣ'&WSw*z ^UY{vմ$-# OYad@3U=E5?4ϫvײEJ0gH QN2\mڼ פO &a ͔ٲF>zͩ"f 9Es %YVgsbAjkCس>B=8W}nK,]K2hI#hj5R=SF$ΉXrt!<@> tTl^}zmoHw_|t_Ù'*0K6 n_Ko5 6b4^l<y/BcV@KQ0k~ǃr. QcR|qF9~N>>-rgl{uwKi鋤aTЈ1iȥ9TOGb2Aܣ6j(uʱ 'RKarRx1L8ml }ҵZAjJszE'u8!CCk r2fF*S8:~lŨC6c'Zarnd=@>i%$^JB 5MʰOE).-l {N|?TB{z:_Ը@)V=7R_Z ;#ö6|ДŽ6|s'\v=GŞkYlo܌,d6jؽ}>rQ#t E5W=lIvU<޾yuP\ $a`@ۓʖW_E:ѬXtM߼V Pj ')JKCSP5+֙s–* ,OQ:NAR3 1RuNrh1G(T@Q(2yG6ARjADqĎfn:@TD>81pb9:eo(S^}D!liC[Б( D$tU3pf _>dkߊ!yBN|6_}AV ct~sxgbLclN.ǥ>@=ZFf֥R-Lx}pg4?ؾIF(:'oԹt ѐ|` 8c6֨k+13tnd 0bo◇ "&t'bznu~p8G-֧ns?!.FK$p!A%_ qfB.JZwץ y/Ok^G7A9ZGvq8Ol~ycƼ;(֭{̓iQij:=ލ79<9ZxW?gkYWW]Tnh>-xJa/TiQi%%RNrqsZPqdV $cGRCc?Я^,CȴB_iB|9]?.9c#M5I==~8d误9_Lq-^pX~{5}RH{眞?z_}Ft؝|z>^tëg= = /esuŁk^Uy\~(+x?~ގf~xsi|1.! Ի{7vcsflztuac ='>zh,=^֔^] #`0dȔ8:Fׯ˳rF6kX}h|8bo,ósus_O.ˎ{婛Xׯw f_4gеnHc$$ umGWl4\FLPI'r|]ِq;-}5my Iq Y(H\FpJ,Ov;gw'(&{5. G0m/2`JHQN6C`> ?5a27{NWz%m雽m}rv|䧽:XH7'Ov@\sm 4h+!8 kNJΤQXM=#쌰w ae2;#쌰3^nWV]z0`lnYuccB ЕDG$l#"r(nq*F8`RA#g  =Q!'G-dKl-dBf -,-s{+[ Ȣ*`~Щ*4O\#aZ [`Z6k[B SPG 2]ʞ4&̾34E Ea!C64"5֞J;S;a1J1r7ڮgF4b)\Rd.nT^[;qK_\F?_z_}QQq'w\"0t Ν2YhV}n~'FA*'ΡLZ2Gt!Qz xޫ,5IBb=4a(S]m.a`MzϞ5y?`fB%pR'=0kVW'Z?؛b9hE¸)ҫE$OOE:w (4X eDYZy$rHԘ8{S7ueUMM J^שxefR[尟:X Bd%ݨJMz`r"7#.>0MI`.:1 5SL0Az's0t#G/*gg9B#,5>a! 
B .C*@ @8;VhM]XH.W]Վy"$1S˰E\QSv`al-#h :S 4DpDh.Y*P\i{xtC]ͫFpݵ!9+bf!#z 䵬BFJYsKF nKD=sƘC̥ZE6wgKXﰐW@FFZ{DS2U%pbnSqPB,<\#D@rtI#r*0O¯$^+W 6e<![hMQ2RX)Q"7_ J-ip3t[;N-v`&Wk"W/PjPj@udh՜d F-*fap,-]R Ub 14ckK bF3@D0:+)aHko628|I[H`V!6*Z#2ɰE@@|am(/2^F"W O) \¥%0킡D P@A<#k o^q,ڌpk.t@4H EVDYEY491e)/{JTKּ̾(53J_ Wn,k ɘU "N  'OD's:NP!)f/-w2;z6,L;/ 6 uyd 0V '" P`Y &__Nai1`XtR $S(R&@Pv*?'6_dZƝn^g#j-؄43h)GψE8H0hg3i%B(G+UiFJ5iͷeqID|upVŐT\;ʋMiLTJ0'iP{pR6j3y'`DYp GH1~@9!ӻ"TvR>LW)PJ0&LeW(h^`,4zsnWږ .N|"ZP͖4eV#k9)9ErGED:&zrFVvR^+ #]w@5Uzޖ}H[eW'>CuG!AylN]Ha >x/QM:n|?Qu9P.G6qK#rSU-2m:e˓t(8Gz'qe,W*5I 2ʥ*TUO?\5WaUX7 gMku wźZZNX>kTom\_Ņ#mCNwI ZZ*`ʑZA xUD;ݿeiQP"խ TSP4@!yAm1 ŅkC0fK ×`3 v_|ҔN2w]#wmZ?VZk(?sm^5byj/RJ~^`+Ij.| |~RuszVQ׋ڗ~:@L=,FdžQAEYda:4isZdNbs'pa>_RTUy"n(PKH!!vm AeU\ |@L'}kyZ*CIg)afdz$Q`Zx`ޡ̔?har"nZJ$b.-&;bB^Lȋ y1!/&,o1a^9V&H:U0taxpIgq5 n5+)j躽ckTo $Rey+ڐMlOn=񟠑Q LZ2oA+ǻ~AiAli4ooA- wSp ڃ4-&D%D ,.ӑ`Z3k _p<{wk"akkk9r&T뵔㩾onBZhJ=[jQMak!y W xM~W/so/4QB;P8!=dEM݊Ft{ ^Kd UJ8PČ" i\xI]X~$ŹPЄiX&&Jg h)"K"x1#xQʒ2дYgs@thhN]c(d8S%>w ySɍ>7(DNcTAGV*h/uPoc &v`c8nuW0\nny [4zѪVw 3rx Թ[1 NkfaI98>q4ݯ@.K|y >jOMw{eFsG{cKtz :ؒ;8hFW_Hꞏ\ޝd` ׽u,z\aK5?zX֟\s (RْI.FM5n_z*qϿ,˻l~3p:jiNhU w;<<7ޟ{wQz'ZcHhƜ6U[CT+bǻx2iǏ~gb!k# "}r$YFY!υ[a^bDq'/39i[y73E_z Nd.+mzU_꽇%毪nx68f)=U_pGAA\Ve'Zu>?293ZJ 8Ӯ3**-Ds#dLyw%ǒ.|B=7S'hY{!ڛ][G,Ͽ$|1I%wVުb.'3TkNEAB{`$qЩ{.j*#:ϛ<|ԛr΅/h2 M9}9Swdady ؃hm<ֶۖ9^ퟥmKkڶ-mKZZ)JWy4(l(K/--H@y!0E4j`NkiD oi re;A[G#&#R\g.KJˎsaxU)})\B'h T ۇoc꠭t[A[A[t:C.&'u)tmxCM`6Z8! 
K>yxאJ3W~mߺ{>u}ksPXDS88<:Se"yJgBaЯ@Gr^νrQ?vҵ{N+iw D0J 3 r A_̭/8\ޚk˕-*>eqz$cpcd 8_.X=~ȳ 1E~tߏCtqNͤbh$L:ataw̻im;n;68A+gR{te~[,7D\D׳d<hҎt?#[-XTq5Q?ƓÐUydNŵ$gDX<1ztN:yWih!xSJAj^ gvf'ۧ{zYy FF`j==?}}vՍاϞ?O?{Ewuy΋L~1ݿ ՛ﻭ<}sRm+_t&Wہnf03n~)Em[ۂ7&Ke\P ŷG}`/Dc ǣh2$:,?U gJQ@pa]oWF@M:i率h\]ϣo09B;O==iڴ&:S< H3ShEqE-Ł8b8px29loSHeyE*AC*4[ Aѿhfp;Dtu?hw4;n!CNSgMM_&i tgc@ֿXF9&Í7M)[nywڷ0kQ7鷳ʬP>j**7jHo-byeYDA,e={X+ȋ g}Rz(=4e)D-,[ɾr4<=|TJ.>A%( FdA)J`ʸu+D.''eRq 8aFȘ[n2IJ5^2/3E ]c'0mbMCj83Aɗ-[ΫCݺae=$RȒq=Jgcq&0Eٕ@P%Sr7QSLFEVsSÔO!p3_ZMd͊QR $8̓f@`ۆA褴y޿\MpbtC$C@5cuid2㤶se9:*.Q,""!> ٟtJ<_ 4o4CƠufY^O%a}#pLqfdįӵѐsE0Js"Ř )#6BVo-1r%=zi$(&`v5 6RދfyX&ZY* A%ydBIg/2Qƫ/&fv9 k5+SRoºO&mQJcٛWKUko QHmRr&ENDl Y Uk.CZ+Jz7Y{n#~-"JJL_O[MȄU'ob(D"s5r\2 2Lʾuܦ %(&er!&":h̓fJhap:kb/'.a%Q{OC^_=JfKڂǵ(xӋS; @Oē/>_ҧ0 ͹=ܮ._?}5f{8:ޣَ#ss۽;vVxOߜpc&Pz+g ]q3;wxTV*lWqjqfGs.)ndzϼm35`ҸWCYp}Bٺ5uB҂m('- 1rs8ߌaѢ4r̀ mGyR spaŒ JsDX+U#ahp(ȥl׵/S kf͠fEiWEHr@oҞGKYxx,@VhW dHftbj9ڇW낥?/c&Lɚf_:YkFiC RkHk1Uq`S $BH>aEf.H ˠCPɖhׂ ~A qY1y?tPIrzbd\17LX6R3e玻̴=7c0S͎4'ܒX̋@" gYL/lL+$\'=arPCLB( o$Fƌjj)M$}IJ-[ ЁlzaU .a>feH^5LᐔBca.LB0At$m;2#rV&uk^^+e wJ.FI)jy:|cBrnڎg "LI]+aR%-da %mp.HjRvH3uaV%cu?u)WmIԚ!Dd]dÊ 7R)(FMun,*[!::r71ivL:IׅIM+ v<҉r8$+M5'717wO r[ԝד,XRFhehj|tsܶן )ˇRg}uG`3n u`\[o>NZUZtIHגMoH*_:$rLIxNJTE$s&(Kr]!g%__uWP`c",BԸLtjk|>QEq2slIF/@E9 5'ĔX!vd1!,dUZ[Tż`@z8g[(JHLslhypX/BnTu>iqD\]$3ܖ2o@C=|Pj#+X? 
o)Jg8x-Rŋ4DUWm`ОZd`Yp* - .X"޳$W}Irj d x!t\KLR7A{F/RӜ%$NO^]UY ]>T)QBԠAF=O^U'X3JTA5]r%/"3W0_XGA+%~m+P7J i{X`u'[V"U;3,@e*yeY" c*8^ k<.D$D|K2&0 [1nS|BNQNˁ]Ž Atۜ!¼~BjE a%U5 Brզ*tZbVe <ѶfM ۸L2Ml7ZQ6QVPy<%fbBP,VD RZe#*0Kv;a*% znQq&敎0z#ŵswE=x߷bP<&믫yalGL6gysP(yعq otPze$݃}I0BfPߠ@Y:TC5#l0BkOKTjfm]\Mm0d"uiQsck*zp]k96>!IPsԎ3 jzGs Y h[[h$tyzt胡Sr&P(L푱ה3/023 hev.͞2nR5k FNXB(-(Ci| v#lTt'U*%2kz%CK"+.Jd+E(-wYTdHں5UU1܀@_v 09)R@JYI 6_r&4QW&):7**BG[U3RQ9nhi:un扶u#Pъ<چh~e/'([e0WytY~[N2ǣ|B\4`\_S kץEpQf ;+st9E䊩"jΊӌI^FMbbBuZj 7b Թbh4;^94i(㑶m֔BX{4.>PgJzYNr`} QBf>J;FS+xx9t*ёj36GM4L~/vTbgR}^qt.}5gJy,-͹cf;cFbEE.RFU+җBz[j፳ApdܗEIuC̸ى-R\!JJ|$ÉLʆe.Q1YEgv4XV!̫S?TR "*o!Ȃt"cRPb( J!)s&5 A(x+f`Y'ku͊m8Lp2r0%* IM Y[|=0 #@cּՉ8 5,gƷ8'sO[-d z3m}Rn>'eP1u =R4KR Jsv DיQrJ'(e&Y3mH֭)n8 @'ZF jHbҪ~ga4h#mMF+C`d@r$H j2F$6]_ZR5kr t2d /waK3dCeX@ T{ue5D@cufC"^d pr' h!BEYa*|J | ha4="g>Bs-hH".H3tRUʁFpKЦq lJJ -Rx|Jӝ(XPGԘ+#HXdw/2mUN@: ;k{))IlB,#op 쓂aqI n@wc [%0[ A)xkPxCF6U9l݇*kD2F -Ʒ?WĈqtsK>}K'/n|WR$o^h H& %c^а$T( n*-W)=^6B :;.;ȣr"~ Їq)U&7<bv8 n7D5 )v.qnCɆXJj n(,[膈iIq+_.Mˇ2"!#ݑގTBm-CWwSO˿m̆_55>Pvχ5QM1E]hbtyw1jcP+sUХ,=:9Qb:@쨡kz2pKZaV1j`82)%&ZљLU8MgtmܖQ1j!ډ3!܀!uҁýT[Ss. ?9 D9H-܆K7ޡj ߌ+NϿ'{SN~9)-)(ٷ~m"]>x_|!t?NGs?)s2INяUvȏ&h: g+- wtQB[#,(+$ߩQMQ'xޜ/E+$W=AW}H Yϋ٢T 91[@hͦE{R̟{7PIG6ZT->Vǻwlx4h8GQ]cV(Wf7Cڭ[2ٱE6$<7lca1&7Fh#nixC v AP?;C [jT6Fj L̜&^㟼=Uj-MѝNf른>-n_#lx3J7̶pXQSՐ=d.(CwA ],gkȄcUcÏ.)\.0UJԨJm|tȊ5dm_Ԫwc q_VݦI) RH/pLc^hvY<@LPF2P4/j%m_Rt~P2"5IUSxȧn$~FA 7I@2=5? 
TJz_*v[R^ +?DN?CHֽ ]f˝.݀ hs^"{CF ɪAfv oԀGϚ`}bռy & ҇ÇcjX=_0dub'qW?+)ؐ+agNOK|V3ԯHܐ:#>_(lxD+a^곕_!9M_u9G%Q{[jIFmW?J#}U_U]DQiwoy|!S$9Ǐ(ޛ (knSDk%EO>34Zk޷>$;D l.^Kw_*Ͼ,n9{s:ecu=|SYKK+XӘ3,'#Bc;rJS-RAR7>M(ϙqQRSv;/?%)}QtP@ez&w:NX0vMNZ^z6x-3K9pu77$GISCM+` `J[A[ׁ0(=)jy-xwPnLװ7/aDw[`l[s=w>S Oj~Z2&ov@ ԓ~6ifz )7^&mpkip,py lst)Bjg;&KNylw;y.>ο]7?g;=Wcb#l98>X=AikS0@XAt6bHSbB44@ؾHcHa0[X3k{؁0+rL%Mzhl/;VHg/~ڇ#>Dy' H0R^ 阕S a7p(ߌƩhD>bL~ :9ց3 !~iW|OƿGk=+΁[;%Sg#{|{$S1{dIt9Ri<0Z^{hVK&ޏvYjs.0ʒ?e %8250 >$X}A ާHsu ٻ޶qfW`ϻ(boqzAl@Ĩc4o~ȎȑȱF7g!YIfC(}GSVxcHlT0[^lK<>#k8 "Cx}$^΀O< 7)p" gH0t,!Yd UpP|WQL+G:/3$3;R!ls JOVfm-~T&Ʋ}J*}tA}%Qr)Ա]JaiYmMGRO _el oroYxrzQvB .J(BQ~܋Ѝx-64{sWkMγGLnd绝s=U|xSDft8¹]h1mWkV9w7`#Ϩ "Ŧ+Q׍zރTʻXp fnwg,[4 W!mx7 3"Oϔ%+-ꧺfLB s3]tI bE6!^^a\Erh%kIL1ݐ5DQ9nGk :lmfBP*JG c RZ D~YJϋF2u<0eAΞ(2ۢUbm(͗RLtKt0z:jj$|ZΘc{$6PKLrKL=Y{'q%ʑXĴb^其zI:u8q˷c (I*D{{۽cuq\)6D1h;tZiH9>I_&hʑ9dAeCd+;/0[+D]8F5sB-a~Xl&Qpv9 Ljo]@X5JJ|&z[Z˓T#8TWl5%H5d^ x [ފw{th1VE셭e[ VCmz+ *):@H)uRa*FّSVF_'kLcrPe#P":E E$(c>f eJ^9Lc%QBZiN8~vP͘P3A"ƾp_Y|n$;8m#!Ivպx|USGu nj7Z瘋vMWb;_@ҚepP8s-,5Q/xknRKJ>󤚂é1 dEz4aWx9SЋ l`8C;}p⫨ Фp9G|%h8SO⛐RެI !*` HĎXZ$U䂂:fm9VD{'b:@1F7vZ( ޡ_Dӧ-)2=LIZ}b䣹y w5Z_fD"9YDI%aoE#gS*uc{.ȼz͵udR{AH /%$ r?^lΕ:AA "1 ҹ3+c>/QjfqqNq2NsVN %o9 k7)L|6&Mm_ *7~CSO@R_5eQP)MnK(ڟ(-ӍR(7sruSn[@ZɛEDl8nD `*\E,iFCIgcܔZ)LePC07?l%ܣƌOHR606m|`710Y LoN v{kVM۫[7a7|[U,m 8R eݗ"L>] eʇcMA[k|{.ikߋo.",9kn]FZPVc]gg vKa?ת5JJ<)Ga|/@q4Um Dw! 
MtYeUSVKF\RF.p=Bmbֆ :۾tn?'i턅I\3iv aX;H%ߦKg"6L6 RHK%YI[`^V( @lUK#$%y/.z/Sd:_a~.~p*C \hbH%’@˸0 Κ[ObAq$Y4±`eR[F.̄ ; ")RvwS.s zoqTng)mԱ5 mz{Wn!hS$_*SJulNn@^sP: SJ;;EVr̞S$9mP%}.^|vk5v2#YX9ge+|j fB?1QWh"U;~݊!% ˜Tr hvqۏIsqhhZCX"q'wN*H0j%hI(%dkDFK€ q8J0HueձjKG2?8ҭyk)D -RX+tiF&W^EEZmntuO}ɧVBU+VNl;"h%t -iob۩ʟD8;ulNdXggha2ϳQ|YKqg @1Ǫ@ cˆpniKe2s_#lo,+G+¯"AOy{xbLVqbw"RX0' ϗP' ;z O)%T0k\otF`z-IwlqZ yuo?{‚s4jPG_^{ߋcrPh?@_:`QCqufi-m)Λpl9HSHq#8pǂ Pq"\`QT 9(:Trq#:iX/_0k\`ݼ]d[ǿ_W{ 9|pFC[x#Sc<'#:b04ƁXlZ)E!籶a'u"̜0#Qd%Oi@t3:sV֧I!փLٴ6=MMOlSRTp|%|8|w %#C{}V,^fleu8oV&1㳫S&}n/+޷*if#Cxp]\tۨ&d3{@^1D=w"$NZ h0pv"{?_ _Dare8=3JN[/Pą`3xN k'wk'pgPXvOHFp ROdjGAhL7G[gc8tVd0L^pa4浚) XEz /|$8 .f[15;1x4_nt/\;_z'폌; u;N_:+)ONx຃cyb*,#Ob8~h(Z_e ? So I{^:CѺORwzf|WOӓ~ڸhqPÏ6o ^ͯe܅'G|0Ю{hQmb xxOK_gaXOW,7^Ld}?pIw;J`ʮ>}_Rq~2+0əy5/oz@ J ,Y8 Jc?L2K3xA)ԭo χ>mxL^MNǓū~&[z܉^jRf*ugZ||JVSFquSp֥u2F`G7.m`gCl Du |0ϩ{^:62#mP]` PlQf{%_ggp6sM8y/Ѝ>7^s <=Lݛ{een88üy0L\50Z'p^ :$uh&C\YCB)8W5Jxhd"C; +@k9" K e\՘J="!ۛ{%$EnƐ,c8q}c뵬d`W2aͅ)c`p6(g.ϽJ*Au'ԌM텕# m*rA-*pf|9Z 8|8/ә3 rN r|{7-|[Apx~<eW uoI?*z"MOU(4q-} %qڪJ"lWj1 *jTJ/hbI;VmYA<Ճ%zgCٜQo,t>WhV+ӰKZwjr+wow]("#DDl`uiqupo|7\>o,@àcr.5a0FJ"dP+Ԅ"i,ڔ,TgjnP\fak1i8!B8`85Ii:h;!C1Rz>W!MnD s< u #0Kp}7rB[ZDM2d\h\ Qs*q;ciT+r1CAIS9&0qHIf#6$'E*C8@+Q5%5U:AA "0CHblK5b'1$a"a8+y04`Q(j݋sK 'Bp(O ( C,QD,ft fbcc81j)CQ a+|јjc+]5$Ζ/"V[10R@/6Ʌ5 ")8/ PB q%Ĕ漁ZKix("  ;1U8Hb!2%+(!Ia$km8 9ŤQ]-nl lxɗr?-=S=9g80V/ztU GR0RTv \,ھL4Ǯd.%Kp`Bb Ɍ3*lU3G}R,,' mcLJ^!M1bֆ.$ n'Q!%dC<i\V ЫBFz~Q8bLH ##i%鉞1*1Pd4\9M!*u'22nzZ{Ag3N?ei& bAfVWU__MnW_ur{ˀyfE~~M]^E@Y1XūGv(⊅o /Dc(xq[-7/ W_dX.G,G߽}yFfrǦXC2 h-Y rz^'A"#o#6ž>,0*i=b 'Dzr\rn=Ǩn5QT9KX9.}[]wɕ0Iׇ"rմ֪Aŋݕ|Rt*I /M jT6j^$"D?K֪,2)ZA =Wq̋ mYo+ dzGjQ{d0抏d.0!>"J{(}\%6eo{[Oz;JBE3?4o6bcon}%Lu??'*~8jp9*"~U܄U%oBE1H \rLl?!vub-`}jrF}4-xiRpAUUژ/RXUK=+~L?KE)R CSխ$[oտ_V}z^rzo~Ec}͛2kvo %]qk?p?~7/$|u}5r^w7T9goH3!y7o/I]~xiup;M/ȯMWT{ MBpK>%~]T+Lnco༾s>>7Uu'wyGe5Y>^vjWӵw1V11A hX 瑏, DGET,0 Ƣ2:&Tt@ !nօ0`df B30ցM1Rsbz'H[+IYXfVNqT6-ʁ`ߑ(^Q@K@jR65SFlySҵW*]j[qqж/Tjg4VN|+WV༦/3^6[_(3mwqQfmLd̴ 
GCeuxLRp{/xvYw^лsr8p,\Fc[xm4:IK-U3< :]kry=tmzQ]K~ΰ,ۀq. ]TS0-{ &2}U0Pp1AьER*5 Br`3G6!01蕵^!4LeK.3 ܣ,ĦY 㴵*>NRv<OR3SP';9:.4e9"J;(JC02 بy̖M5Vip{ZтGXCF*I{+.:9)Զi1`3S1`}KTGw+;@E\ )`A/:vN ]Bv"=3kLH,^eW`+:r :(2T1C5&ш*98`hLW,+VY`'8ӕb:uYkgVJ m;,^Nt,iߓ դPNW1ah筦̈́q-nJoӢICJr`!ŭi7I0YxI6.SI^YJ- Mn8o-n0AF[l=:ť"md_5 %4vnӢ'҆º~냗t 83LF2uK2i+U+nӃ?8\ۃ =q8q{}xM D[HF/\%σ+=)%r6yBVsK?\F^)#"s\dbEI$"lqtA혃\"*z@u@)?zձAsfO'Hnxqb{5u0Z,ItgT ᖫ9QSZcZ<6 Rc4Vc&rc+W+ªvkƭ"EXQ)m&v"q)\hǎϒZp'um)ĚygmGs?,ߛ2ՊKkqOv=nmZްJxn.G7d|}9 OPx#G4 cuSIM_q5' k6ܟ\LNl b)d;O ൄ,&؈fo:mŎr: ƃfKѨp{dm:p㓢=Ʒܹv9`Z2LX ;=䚯UƷj@T֩y7} (l ϤoM7ed)K̎êZ t%//~ǚ$ƚ6hdM߉5FK.b~ѳ{<aiřexY{L*v3ъ+\ V:ZJ.x:+ u:sPaߓJ݀$>!Ex暈a2Nn[Qoݜ4 vÄNN[n 1P cD&jq@yRUc_{+9|о={XXHcd,v(e7?k(Xh`Uc01sAVg4ie$kZͣ6j =12qѿVyҠ9.fsV||8O.U-p-[ʹߏȉiuRxpFաQu%*On]t#Y$w/ aza,xx.1dQI@HK|VN V[M`̏^>ϛ stڸQQTc@FJY2rt{?|k1u3vX;1Vl}Hb|w\(Wx.|BTD\sbFK,% 3eTRt>jTk1^b%Іi܁LpdhP5,sa6֢ٔq]բvl AfI,dh:8Q8Sq(PJ.âs"%`ҹ *J0%^"&&bNic5H9U0УɝXm.\.YZ@SAHNU$W>"LJ(f3X\gVyGv9l}^S$t+{i=Y,x^qT$PHР}d:ZF0zrŤSSpGJRah[M{7!M4h䥽w nc$v: ƸYKn !vޒ*J'EgRdX'RkY/G G̙tؑW1xKVxB*9HF+r^ C<~).E@5ta͍db[+ 5G! VVm`b.#1LBz\}Dl"72N&w6I' /șf *g J = FԤȷ5 zR &8ȍj-kl Vf}Vq@RYBd H;semo 2Wo)leƻX]UjzX)F&'bu3juxf$Z3EPZa &ƌA*h)˅Qx,jP(qΊܟ^E'RAl|A < HV f#YE&U+끘@IՕKD3,9 QX$㳫ݣ۸%% 0`_^_&nvfmkr%cX{wp~JG |'x/ gD܏-' z\Ɗdv2p6в!W|/}M{foЦd#'ousPv7{Wȍ/w erA&=JI3"}nYnlZ3c[fW?*ŧ)U*🦓yxؔcwog/ف7rw'ר\5hܫʍÍRKz`+?WisO$vKZjj罫'x_Hzۻ5 MϠi*mIkp0yx̋C43tW^_4øNy)J6AսeՊot/x[t]KR#*vęഥ0 2j 4UmJdf WyYEjHtUVRˠ̡(;[Szsc\-2iP*4%*o~bi!)x0D>R=95!gSe9Gj/ q_M5jՀk"Q>e̛%ꞷcPμW Tzwmqk&."eSB D5d4LbjO&&We?v :;1TFtZ uQuvJ.CJaPpzG% rJi|SPįj wh7 ;J Ýql%Jszx/?~F0[2 ݲyxSaJ{$?k_Kow^p툋!핟BF chAqsoK 񻀴7?߾/ۇ{ZnQK%~^ş ؁ ś"@=`sxQ!Z2Gk~cX;Ms_ͩϳ 3=-Due cQ5_eλtuv~fc  Y ,3>MOWnSt~}7ˮWQW|yX W\ zH Vt&M09U;W)K3m&RE 44d'NgjEupe("Q!\A:W5}ӖzO!s[U/۾%^NI4kڸs:6-NtͥZ@`QvvMI#܊СxSN[J ]]YPpѿqncٽY͒",;+7=j:Ncfoݸ}ypҤ&@1M*q(Tr][z_͒w-\&Kf](9)d*292BV]}jRr429MwmɫԕI?3xSՍ-}GAf q-.YE}8-A`IPt|u !?MWKOZeZj2^I@ik.5>i. x{Fɝ=b. 
Ȕ:GS&<Ϩg15.gLgJ[qN"2Y4Vj%h,m]Agdz}3DbpV,t![ XcfE$DZdODUH <D1%T2\3C;,mF)56ΡqM S/yd eRӜEpӲc/v)Dp wRJ:mJNER ES!4D"I)x_#9'0q0ŷ STnQdʩOm eZcX,>/2E|)-u\Z1QmUHzzUlqkՁd:h(wFkvB+@+,rFGK1S{"JC[RrYm9B4SF~C?ԣ6AMI"OE:Ǚ>9V&fEL.śTNHg3sU>)Hl-+ds3W✎ygs|J8J?{qjB~y~~+ɰ'Xuw]M1A+uՇكZ@)u:H5aC<5{`Uvx+$+c_EBr-*'gt=~ە 5NrkČXÿ|y:%ހg5-5çwς7_J9?3h70Y ;kqcEW^uʝ"MdU_5B~f{JR7/0fbH<~q]N:)Kjd}5ΛMeȕW"S ?ȍ) OXC}4X+kb q/Uzѹ&04wq>_S(WWiO'|v~?[d6M޺)TBP'|}o ~iw3RW30pfPebFc?},7FplI˱vp^ _'|w/D>ϫN8.=u{:H &Z/q; yBJ(-:;=ĜMMc 6xgR12#2Jͥv6N LRS .2]j8GT.H:۬yM[c1r-+ Jf)Iˆ͘!i H,OsƄ"@d4{dOl/#n9_ 0Z]GoaRDtv^kJKK9JL40#pw&蘕9a0Ns `UF\KI@kȤ2Vg]&\GO V Pp<.R("C5X@,`qy-AS&+Tg'CHw]JDV2p(WG`d m6@F$݉Zx듻o236>ZhNny&@[y {Z.קUhSHuA&d2m=sJde+ 4Vɱ?Js{5O3~Zѐtf|ypk'QXS`eJg=PRVCi(+z$=%+J/oZԙD螪էo{Q%fqi$ޮmX4ZO3Dztc&Y b0 J{W ;^FyHC0%\ Ƌ,>i/ZQ{#/AXR4O`eQƪ_?>` bOw3c_&Y8Av&M0`7U^jVQ$EkV{m8/R$4*Kv&2&c;ǻyGݕ"z70IyߴӼGe/>#*n6oh t>Kio&\ˡ:RUtnty_~m# 3 N.wo[zQT=Я =OW/T6N$L||]b(P[@%{j}+IטLv羃¤>*(]?Y9{5i& T/Pk;cƕz"ku&j˽@ <rP(Sv^ՑV/XăwZY]B)v {Kekh{cz5sm:HEzW8ˆu9P7x~o&_fOOpvLJrxɷ)&l%Yx?ne ?çwRRa.&XlRVoKW˳֑OKV/2㵒&#i@$&yh^Z<㼕CϲGV}:vNN`g#ӥ(Hڕƣ}PFqQ'.H2 Bv&Ƥ~e^;h8Ѵ>z{ef-8i{} JI3 RWM(NsP㑗LOeP/ƣv8yKF@)|a+^dҺ ]`TVOἜ59\?}~B~NNN]=H`p^f QCp^Zp %y S]"71@֊\F .DR._l83S'_o$4kq+B@me+;L޵>#E/s{t~*VR/-$9ٚzؔDZ ARR);@ht׋wοA)0,_hw׽V|i"UA*#HF$xj7#\Q,eG໮0PJDnk8:b.xRCC1וNVG w6ZUo1mPkC>WMeo c7i!=|(=.V)=*v?z3{weZ6V=~:suӛ|Q?tS瓏7է5c=v372O+7jε+S0~[#[15Yrm's#Eb:h-'[j2#*sTc{h+˭~{l1vsZWORQi[OeN9lR9&h:8*hqR_%H ʰ<:Qs~] ^ϻ鲀Yprfq=s4[ɠ"8'WpN>^1$!sb!ұݏ+2AN) #AHJFE'q!1wBBS*t՜QcNRɕQ*agr2:0VL=A: 8xhۡ@d OOߧՙJBK%fPA=&E Y LB*]!ͱ-9NAxUw>oct )T\G1_)MPa,`#H'`oENJÎ;'p U:A"A&n_?8D QBg2}HbcGƢ(Ce7 52T'Zv0*XW@awnreӃťث[1YMCw:7\ m<&5cy;J5O\*yRQh!a6BVzGApga!6!qY]7c0Lh¸UkF53 $J#^Xdt#р'eV4GW_a*XsA-<~3+8:#X}B="$ фژ-{w'6_M˞1#~>Ku[35~1?ۻ܄ ,|r=|:ZRS;ρonBD5s^ S_ۥ]`ۏ?}򋉙ϯ/>lG>"]a~qU:sEJ1jT' n1YǞ`c ll#Y+|0Vv'[0B; G]T$x 5gJjF0)x=$*\)n@^Tp z(1O}矖!Xbvw;AϞH?{uVum{* }JTX02L%yƸ™b9rK-<Ʃ]`FP*v1lĖrA l ojx i'"7Uk .#THJY!%I9cR N0ʔr/%1L. 
ݸ G]64^kdɭ`q}!0HV;Uõܢ@+]cb&B:P9NzX0a[`_LLa5:w1!uK5ԟQtj*f p?0 IoO^,4:K ( Zўt4C\j!UoLqJ߮T&8vޞnN9${LGrZрlɑf9O*.0#Yz'}~IhtlͧY' \Re)OEWz!j__ꋑŃ#IqྥCSvD _+ %ePidr2f5}lZ8S8`(g/ŝ1F1Ia ,qȸTa1.\ƭe A%2iE4B$^1}`ztج.z1׫3jJ\'0?R m/qV@VO0FS.5jtIV|]蠟t1xiFGkT;i&u)8GX֡^ۋ̗dE~{~#[72N.6ܛ՛W͵}UpdEe~{w:riJQ_A٨Z^p8+2)@dԉLo3oAT PUU29e.}J(捕h 0~1/z[N.n/&~R_OuҗgO1I$ \M2Y3oBϓpUx57M7bȬǥ!%J3~5zy唧ɇGKRu1-g,[~6) `pe1_~v{ôDDpK*!O?ܾ3FRK,MVO?@&j';K|dTڝI1ow9)jz{JiwT|-KɣK~NS _ZQg5}l4wԦ{)TAcR(-8r+':V  szpfVQZH}A7f9S$"o2YU؜3ƳŽT tk]\ZL 智vՌL/|͂}47r 6˙}į+gI: {pjTX2IVznuv++xnbHrEYMN1g}\ ީpq$35[C#0ykA9#c5`mYQA5trrТG{R wWGRYr@j~F Ow)_&Lȿܓ@lA|1j=> 2M(&p|~3\@&o.E8Gw壻`w{,-)J"RsqRH8I i,0OKz~vÒ7Ԏ^O>- C,f/{nsKk$5Rc,oB4R$+m7^':K7ΐJΐΐF*z5,kyǨ>u԰*ٟ3ٹ}`mَ}Aև*w&_y/! >F փOZ(i/Uw3N]{7z7WdtlQb;rh1 Ý:lh<$#f4fdl+y-_Υu]1D2#M?(lx7#5zCm5zGNeeO88J_;z7|#<ӄf%W{zğwHTI|h uZ S4ʵhN(IS;o/Âs) (ykVR1aw#t%3͘Ufu0X . Ui}ܘ0b=|,cX /"£B`έNXf^|D$2(cBv2+n˱`_8YP"IjBMT  Ό#ǖcΛ6ֵNzc.yn8u%bHXF2$a #-~.QS:q_I 긫o#S T29.<|V Yq(Dr~XYګ[֓F44]?lQ?~LiW?^߂gnߜ]Eb(Gbͅːߪg9L^ * Ӎ/b đnsI`Raz6[t z]{%0{=gIg+-CK7VTPYI^W%i`rĊ 5)8E&ԕfpCXk3A#B6iJ^s@ae]ojwBDtt1]DZ2ْF9kfA5S\3[-Nz[~r] BBҎ{d|f,5l?(ypۢ'ݱ}-T9x~JI2;ږ`M # l@P'˼$^+ټg МC Hb2\V"۸N\:ST+ijP!pu1sOZj2=لɡ(c&ڠă".0gKO\lr7d^|ЮcĐJkRWj %TȀX[-quø:x̺ XuZj2 aڏF%խ{tϨ[1){?G:3@<^:y*64ĺQ˶FA^\M,(cRN)BG4%Zýi}fA >ъ gJUq7gTJ3Z 흅4,K/faTP#lE[փj2Bx^ӳ8#9Sjm6TA9"$m^!sesPkWpA!JaX 1q~1Ƀ5*;  e)EIkmnVESysfdNZgf:M`FTQvz uHm(\<88:qlD,sa.*G{l^s..Ξ_D©f98[nu'zčSvU*r#!9^s&S,1ڄW]ʉ}[xٽ? KVuo($6akWrCTGr(k[y6k_ZJSKKO3Yh0ohydc# EX*µB^dwBI-+r)wc QQ"d݁{uZIJ6آdU{b1jPQI[JZyX"ȤxXAFnC+e\C)<}}:61o|V? 
Ss0E>JH 3#J9KXc_%ˈ@(40,V]q(X SRNbahmhExZ9I+Hꇭ&Ɖʌ+G֣NvՉtO\[ip$P,D-e7]n걅Ԋ`z`((h#풨˜ Bod`He?l^ݞ?~oZYI}Ikkdbx/|Zㅷ o/8)zA eV!ձґ1̵drXHC(@/x1b#[_):i6haUEI_'-@ҷ?l/]V7M"o[s)/n蛊DE |#* 'wY04Ea/M[edWSlAPeCEVASG&eot]lJ x} sd0y92Ɔtֲ-BQM9,h@MbAgsRRZ/R{'^>HcDnlS7.AkcNgq qaB+B rN  PnPĵ"KYgMU~{p݋[3l-2&A9I5嶎<UYogr8rC8Y:5fZ'}G gV>lz)6Qjد#-cq0|`O;CZ+)P<iV Nhk ^E};{b ppVͥ_S㟞[n|EJ2v4:UUmJFE00Th@GZʙY"jRC o!9j,7څ "-pLAZ1Vrw PuNLvWz8HWɀ"hmM]/ET1( ).ց1NXPH%áP &:!%obц*ܚX$#*BF  f\sn1( )6�C`mZ/']Qӣsg` tI#Ly(ƂM:3fc˙3x%Jbȉ.yG&nby+h-KчqI`MZO|>EOh&pi/O(!vWI#_kO~ӣ\٥/t:I8Bk%KZcmMKhTf+aql' " Xi#A- 0`dV,9zH] vМa,(oqxZs;bG!D=2\F%\0iUV0c<6qȍU,xOaİ1.C661 i yS\ܦȅ&Lfb`[.9/o@p?2zhx~CV:Zœɍ'N&^ &ͥBm݅Ş7?;Px†]B. >Xt{_F!d"x+]kM Hc۳E?~ K:?=:sga$'%}G#9ft\՝ܽRsjZg2w?]Fjex)a~Cj *j]ZUBMnOnoKtChNkm+jv"-y> [r>GfZ#-|NZһHkG2C0-X#A=8!Vt5zesqEް6ֿA0FpTs9HI9L1,-GgtoH=!{#7WߐDq(BL/hfHZoHYwt}Yk#MA|AalK s)PFlxO.iOgtx}]j=zx= #Gq&ѓ*VGJaL֭9^!1Bj}zNinsAvE"\)YYG9Yi.!FD Olv,P ,(O6")6pDwTY&B "P i6wf,D[?CQHW;?5)Ju hv.?0ˉw9>ٺ\* ,D7xp(I AY-$|Cj,?|pٴ~'kvhh$ kZOa"ׇg `erz쨫<wk%%Q7(0pc $M>O>d?s}>vTN"Iȓ$_L?%4Q_ϔxwJig8OHax۩i:IhFm\'^G0Rv0; \fF?i{峋޼~sqt]'N;NXzٯ??'??{ųoztөuҙ~:w]wG4In_vo/{{;^t,7?(t?Nay:'nlO2=Ncu1y4 }!\QҽqdHVX/mۏg2L[Xo"מ|i 3)q~U+h," kp{O$=9Cwu2%RpeV_j?Cow-%'ۤup{^9K/_l/Ɠ7A 7~ {5u΋+V绥 Є^w܅gtZw忿jjԽc ӗa+ uY?@oFم_s:N2_dY74{i򴨟GpЦIGOI^1,E 6jv ?؀t"~kvPg`xtىM/q3&߼} O)KE]gL=̛[%oZIi 1.=,ҁqr_Z=\CG@ËX/`^ 4`"ҁNFXQivZuR`<{'sR%O2QƘ@!ɲ|x ppq+-gi4?yw(\Ns-"w*o:egEtYK$@eg5@ v*䶕l:)B .%y.P"r2 eЊnvn;tT™p%bdurBvKXWA#yWv  l; 83Ec%,lU_Tɔ|gSH I\13DEɣ 'T h!DIBb͝ 31 ǎaNBJ3]JrjUcPcX$"ÖEƧ'T`F2F08Q"#CBaI*e=+׮ZE+.3mG)Z0L%fZ4]u)sqq[N*EЛ JdXי ꧖tBY8SU1qkeΜ{]]|Q{m.a˔hMoXU!f_Mn˾ P'/ [)E+Ga\Nx}6d\މ3R0VW0 Ih4ooqT4^oAJX=l p8U6T`-di ḦP -`Ja,P:ǚU%oIr3yxKARmn + *[-ZPeO 6QOhB%S0 5ukFА4m7p| QR`D X) кC27P.ʺHk a@v>Gl3l%׼i-ҲUF>T?&I~L\Ԁ^,J"ZܲeY2sC^K'x]ŒYlEK*AB 37 AZ0^J!C!0Ʊc Ske$7XJH~δ%.Q -)#YB/svGŀ7X`Id!Uf"Q2Iٛ,?$% Þ@LWU]U o_aVF[ȼ+emi{(k[˾3vӿ3?_EViY7ܿ܂f'}tB ^)f4 +{(|o{p(Z--o\ &p?f/bqL\p;E62۬85xQjҿmE-r˰ ʹb8fst9zLFWHrd(nK, wW. 
@vWB ζXh1{8;_8lSF\WgSHZ>̗O-)A.}%?Z ~po>|^i)%erYLZ6Qf^t">{.]@_I(CyE0o1BaզTk:mSn<@b$⯶V:Bupwd8s07WLiZ!njO6B(ZIԴ鄋p]~@0f$<1Z2r9k3F V%Z밴:(eZeP]E}e7zYK]&yyA9k#+=4k)JS:,EQxn{;. n6;ƨܥr)@:1aE>rVD.cUNcYOOwUp+g )5nR%e ?*˶76EG^=Ǫ4*MgOa:^Ke<͕'e<_KM.`a^V~W38(4_Nc⪧f1sU+9G9XTdR,ΑFXc"cXJG 6ұ2ؘJIF 6(kR—ъi`T⤃qL8Uߧr')qd :&w\ l⎑JvihɆzY]nkv nn<- 1cFќ w``#dSA;pAke,DCp:!4L]H0-TX"ѝSهPZ(} ƊXd'&^m u fB5)X')~^ti &wZS: ׆wjk^A啡E@wd~x@5_~RR!YX*ޝh}(]QB#[#tNcmR^{dMj(e/hUH'_@vx% {ٶȣH b6#Q(6~Ov4 {a*@\d&o@wKd?R;3Yfƣ8󇊤1PM}l돆=sx2ֹCœy J{))_-\la(Al _$&tӭ}K`f^c ?S<`m[1.r̩FC_<BRgyZZhI2+ a2aӌWqϞx="%` <8Xp5Uڑ6~,#\!"µO,W[f<1&k#$}_>M/@kV>u$7rb 'EN:? :@~2@u;ocB2NBV!=tK˪+/q#tHn fn5MP!iiZ;l>H(%'4H G6U 2*fCqIIaWqBR2eDDA]A agX(7>R8&@ LSPiYƚxN#f3` "$}ZG.'0ksԘ4Q[C6H] 4]NZP'h )itĈ)(~K"T8,^O tZQ^>,l#hw ixh1^e<8ᄇ85o?_n/Rv1TksKmX0F![ðBo-@z=yڀedmz&|\߾9# 3t6_-@ ޜZ7Bwld<JM&0>DjQ9~g:ה''r]~ʔJ|HZbYE7 ]Pf h!8$M<$XdžJI'P0v\I&!f}QU X|R 3 @@Tb̼i dZv"9Y!u cK:@ *hn%>8{q3pX7ޱ$G.a#T h$LCqlD0pG ,Au4x=6*9RRх_D2E.b4Q MQl&:5#J<3â^FU-߾DW4yqA"Dwr>Ftr Bx*N5oRzTU1E;RG  v"sSHG,(ƭF$̢AJϑ-T ɑaacQʀYFzxz&.;Yhm[ ǂ=asL3bq1$C;aidȽP aýEQSWxH(Ff&0 E`k1d Xlz#6:ZqDJzgkQbQ:FZ⓶nD1z$gnrA%cA{_ihdzi Ug_F(92)htx6[x0ſOO,ߜݮmz?0"^;.`x`N0J3 TG=̵dqIJC('hmdT dO0W! -.Rh<]oOh\'[=8rZ%@sr@VʫԾSR:D嶂"BrNr@a 2iVK t-yL[p -Q_磿O&OIh|xx g-d`d ñp5 W,|,THă!Tمb(ͣf2^TOF9qtw&уd쁣!)u[h1r0q%-S/gQ9¾Vy]w7)Z t@9D\ I|W"9+ ︇<5Bޣlg 9_^$l0ٞ~^(w4rXV!,Yަm+]>,;P<6ɺz$ ߙGgeT NzϦ%4)Rhh蘭[G篂LY8 *": w[0NYz Yѷbaf!hp[k]^0kAPj̇;z=Z}=zl&cgst9zL^8ǣ̲9=L>8ez}f2` u8<$;! ݇e<(/; &ˠh}SO+ AGq1|(+$׵zGb9QQ-E$1, aq H"hR")} 0رƅMa2Nc"guWр)mb,Ղ:E_EǴCQz5.|BwLͮLn{)'ciڄ+"9U LzEGY:3PUciϮ`rJꯥL-2(UHgW49<3 uLVRHJEqpSX{H`y,'c v >%"NRӟcuKlϮdr\,"dg쮻},>Ĺxv%UIr!:Jww9%_}]FLC啙Gl_8u wZߠ+-R鱢(#9>C$%;]JPJENm*%7ٻm-W|ۋU/aPtf {DO8c{t~N(lS֋X@Ѣ3t_`I+k Ua ߻pQᲚ0tﰺLԯܸk4.c!YZʬL>4dƅwLthR3PG4f^_yc rCN3\YI.%t.Y\<ÌRq>.Rg O!Unrj W0zT9 qG鸏Gx,\1s6XQ į1X!^na7R}$(K镗LW[LDp`EAV[Çˋjnb{_E.n7F\p>wej=^Ə[2B}T*4d^aof֯ip1jJߎzI}Z zQaRtcow!P7O]^k|P ]*Ncjqtƀ Uq\ Tta>QpxwQ'XUd"E.E' ܰ)]}J EQ*Nm. 
Fݤwo>7EfRrϿٵ3L%QjiyqZzyQlV;@eefvo~xf$5?\ݵ늛tr׳Ov=\ɯ̯ugu<<L01)Ϥk[ND4y!6h-x=L+mȶЗ`f^S/qKĉDqq ߭nJld57ɴ[K[nUqwivkD(iͩ: nuUUm+/iDwJˀ0 D^pcѝK{59SJ 0q )bTFѐ .i{\<~j&hQE V=y?\93dkOSqVlq *QwQMCTy:d{pUa5WZK&$d`2QAwΟnmF:d:'`oq4B!\~Y^3{:/`U."}~qYzB LCJ~sn+ja>d:pw:a_2?vRK} ^'2*N+šH KKZ3-a38\>D}z3?~!\>۽zwT׬sX?pZttή gJ.7Otܦyj|/ ^zZvjmIa:+DWZ.`7:pj%Ψ7JCqj'7NCpj#/KqUj)h6 蝡MG3sPq+2M 1oĆ`xNZ#̹~I, L&aHWnix'ob=yh'[E&. C.T̓yK*܀ b1>d|zB %Jz%RrJ()$M ER0*{@)x?(-5#|i{$ Iu,嬄 ,Y0)y{fQZdQxw#bIaGTnл>|'lYYa1[`3ChL?yhzJ?dpnM_^E*wN#O>iTw]\ݰhZ47sk~Rv(/GF~lg^J킭=J !2 X B%4S|4oW-dϢ{Z{D^I6̗KĠSW)Xjӽ5PC8sV| dߥQ.\VکT!tLbUJucyk7ZlHB%v?9-]x62;!P#֞R<[1:h*2)-|Xt-VNN> rz5#g9mueW!Tn#0N␓(LFc+a$'R&6*TJ$-NR.d|zp&ISbb4 /$#,%t(RMQ!ƟF8#8)/FZlXn4|c&Fc(sMKQ%ܠ>^yMH00"ߑwo~A8"E7sɃf ""@8(ZOCOo.`2Y3~u3vc]ͧG#()~1oUcO1H1>=f6?B_נN]v`$lLJާ(Rb -\w?xioKL]PLI f9N񘙹o=kNg,5nc`k܈% R3D׶w}F!%:"PF"f %qs66407/n^/Kzv?Fy8܍{_ -Jh} i`3(na%.Qp<2,M{1>m 62'+Yc‡f#7Xlᬧ\ ճ~̓&QMY/SQpGcJ;D,ijRD < Rs"Gejr.H˫J~)I9L'*xda^qM %1FHMjv g9 6WgU6c/WSai/QCY7lsX'Z6~s^|aR26˙CG;X 0_oBX3\AHN8kY_+4d#r}rc_nˍ=|)]? IY/IBő$҄a$ Q1gJJD 2˜iG! y5z7tYʭw#߆Fqs1Y`'օZ9jq.L&˻$n|G:41҄HUcrAۜddz|g*Z'o<{YnN^oYm8 (XJ1 e8b(,Ēa.N%V,Q1%]\#03aݱ`a pI &I@ Bqh$`MpJA8N)é! G^(2E.';s$jXP ITKjRBlPiQbx{hR,<& \dӨ U;.h/ý軌Ha$c.e1sV$NIaD4,B xF#s)cbi)H!(J`єQ,`-Mcy$0a„'6_(P|EǸN.Vt4OPln'c .&wl{ϟfHa_߸2rrą'"\{ =sDj,8Ĭi!005hldNHy`&"Aft7kìb`0% V88 E4e+ya׭hr&HKi`C&l),xұ%Ak^#V6I"C)+Ad1pXXCV Ԙk&(Z4נ6^Вaґ[R-F*މ<x>jOޯbuX[VjyB!z zx0·|;R-k%sÖ"Bڭ};gM`j柧aw,CΟ"@U.sW5]S؝v] jM2=` s)Y_ٌu!v5-dN@@~w O5TKNqk}3k9ec7Ώs`5f0ALL*CO)4$7Xl);D@.Ikd '`SoAS)Xy]E' ;Q֟ObN{uY%w\ή./1cZ]Y,e<m?q شZ%[4> Ck'JRQkL'MBE "f0 u=O7H&5.(rK9=7Ѫp ~s *c]O2we@" Y ;]Z0. gB/B dˉnSP--<bcinn@m&HE޼D*\,&ӇH!Py4ŇB})x!,i>Y(%GfIҤu8KT\pZ"Yjr iB}ϔfh J3`rcy["3f6d"ϔhn4}XT=(^B ؟GI~lr察=ǁ,w~5[4ˋ[?hVHiYc L<(G67No_Ho*vEXVR|~sۿ"c2ȿ?v?| Iq T'*bC׉Q<΋ WQri@z:h+Zm;_h3up}~6™;~1dJEڊۛ QKӧȜ2 98+w<$=\Q0=煩nCs^ZK!!ȋ9\ŽcF%hq6/7.f uH|P┄4B[V!ja_*Sa:PiBJiZAqskZkwj+oxkDl;8ʭQn:(ʭP>H e]9edRISӶ!' 
0D?}uDϟn4^QE!q].E5bTdaݝtgD9"ƶֺ@n-`rP1#&ɞ }9-vG525}wQ\q1ŃtѨ 8梏{E#ج#\a@H+m$GEΤia5bf 3ԃB̲P6ߗ̔TZSy(UE dh1ǣ{}z&<'>hhVf2-Ԙ4ԞX1+I7 #R)0 XBE]4SZN#0NC&p;a$)@TXؿ&TRBb45kN%R-s1k~yu!HUۿ>d><#F#ctdѽBw{`f V1,-UԂD)غPdV\D*XgԐP &,y@R|k<11l:AhH>VA/#CIdy~U;v4 ƙK2f@AF؝& UV7ڛp񈔴BJ*$A8_N]ew\~ZLnc|9BC ;4yK`"%;&F/7)-$<y,TQjS#mg{QۃrjmsyIs< x~hBwW+xsᏉK  w>YS0]s (L}$i62+ 3v IQgPsVjk6XG٦{Dekեmo%Sk N 4|Pa C:C$pilaBn.ZKSr$ÍEH WY/F}3(1Uh. $J!2SM"F(B1bqjma!#4ըD@ K)ORR$!MH" V$B :U)g$qq(il0q ) *Sj\I|Oo|rEֿl9QV;On=JQnۮ2f_>l. hkl7n Iy c{xmР@o}3Jjl:3v$R)v14#݌fru*I~z_Z q /Ml_vi)7l(9JEnF\ nٻSN*-gF\TCz11qHtzjX&gei=ɻ$d"Wכ̜J= ~ؿfh_{yZiԝ#g=?F/N"餰gz`c&Sxު!*q/2$WdbU8 S'OWᨦ STnT߿O5HbtD<1vJ@qH$qt$ISdQ ux.uO ՠZa3KYF`R)2VJ^H"HIU$8KaTƆ%Q< ٟKTsϲ5,D^rmzRiA k%GJ];7)˼zb.AJQJR*TkRp$U9TԤ* 8LI_aʱ@ ;IoI;9өϋA-jse{*HkﳓcâwވDW5xydYQ ³-goH̻MF~&#OC}*H:Eg/:EyOF#u!ʅ cK:}غ!͕s4S33$R'}©Ean墊ꢫLtSa5|ckt ^HIU^q.*N٭rQ6_IpQB[[>ԒjRM. BSE555Aa٨E=zSWj?=ol}.V^t.fvC>)NF+Us#!ZQ-wtgava /aŲp(T^ SnEZ)t=%_RM BPFٚ!撀hgJR Χ&e4`Q(\$1ϱ2 >cMӡ·))mW=yߖyEM5U"F ԕɥ% e;VkGJJ,n UTl hk!;)'3A-,do2tE {Vg ߠJ0+3 q$eŌ!=}7מc\>=ȘM]Ϳ"j{V#zɀ@w4X%DKA.V!ƌib&$#8CNLIW=L&e_@,c`JQXb$KLA"R"aRC$$2ߣpBMJ I0Z(*p1gZM~I[K *թt.gWd%'WJ 5~#BWJ`-PPo %V(%:șZߜ-.s`!Q=yCb$wՙI)fY9 )E' 'AJQJg?& I/Փ2Ռ)F)gR.BJҌjkRzR %] oE{ ǩfRzR OJ흠{qz:NL5 (~e\;VRW)rҫR~R1Kꅔzj]zR7p]Âa=+ofR -~R \m V?} QR` B3(p[ehcgVf:eV/A2]aDP8 G#TMg|+xlt_j_,8'ǭjj RA..ۗ c\7PJTɬ2,h-oFNr)ie~ȦN?uMhr?;hx^\!)!yj&/zFObqu HSRX"#e8v&$rڽ(+0Ii̤:T" F8ŸSk_u W7A! 
e*ĀEIh1 DKp 5"5 kPB_&1Ť cRyμuѿǖQ#p:, eq̓L6́B~:vaJ#%b-TD10cQ.{h$bpbBym u/2ɾLe*_f;8;6 j/ÆA%zàqM=*w !*HIHq#Li CNe!vѢ\cz ,T9څݩ ͎B7!IFXY2LŅfK_%µR.\t3~q9̚ZOޒ:e־<ϱ,;+_q;ڜfQhiaww7O(s(_/W&_irBCF(GںYT]nQ9[;ckk,rp .Db} cyu|*W#746[N80Yj+<\o4Bp7^9=U"*P6׏kb(C^|8{=+lyIWs8x;Z./.<̨/[賵J3=ӒovC=^.w|IAWBX)n(ݙb^a'N\});+W Wjuج5o[cmr)[/M,Db]soZלj|og||5P2*F ?֝cn(ZSQt'FZ՘8m}=0*zK^n:/нv#j wwk-bNO6+@ \Erv6TkpӐfV+gSfPp>ՠj/qZX <vsS@EGpuwkpuޭ~N^7c'7]a#+O՞H.ƝnW@mc/S=)Sͨd^qe넨y//j0D\ZRYq6{!Ǩb2AJQJɅPRgT N W,x];wzeS=)Q CRfe,]RJRG5CqkRT|]VG^H)fTR^ey,>(*{!s`j~fG:X/<$ !ilB WF@ yB#%I*Sny@Y]Xy˼?nL$хu<&0)R\GA4IAD Xb$K9)Q0HDJ2r6x'2ߣpB!,SGVV[G*0C˃ml[>b$U^|Uj_yٷ4n=\;\?o3ra^}s2 ׯ(Hۈ6bz#{cjZԬhA$_[t]$)ط'޶ŖۖӜ>wIJ8t1b:FWhLU+{0'&/8dz :fݒՒZ[˶&VUXU2<ڴkcT}W+z# KV0FTŒ0GpXH[yb@Dq eȠ4_"1!qSqiB 1JGgԘijk_OP.@>l77<,:$WU:#u3y'_CIqw,5nt0 cV@r}O3ϛYt*6SN?hY!KS>kWq/J&X%MkOw&!dol<[l24~9䎅d\?wB" J(AqŊ1RI.qppuI9ps%.ç%m:TMSr)Aü "@U U;*9[%U[DѪňI.I{K RpwAϓ 6@"lމNQowiޓpZfύi=;{ $l0 E.TTE!~:srqL|:HwNBLy$`T /~͔w1$Eub5EBGL;iybB5F2.Ƹ?yII,#G^/>O./>j! dԺl"o`@c]WH~ q׺.{uDvqE2HrYyՐhtN{J1Ϊ~H\rR3bxbJ8L>pH'+,tQrU.\x֥9()dhriL)%(L*D!.0 TABOهm̘Va%1)##!~Cbh™ 6Ĉk(_aݫ=@6cKj M59J@ֳ{j3_~}[ :O_a_:ٮya`>᪗ fŊ4`X#HA6n}BĘ~*G'y4{#CI(W _y]X ITr*vA^g痮-|028I\!)Qۃ9飌﬈vY/9a\Tol44Ɇ7fաg0v0Yto\eozabzs} \'O?"H;3Va]  &IM c@sxO-Zu{&)yo^M=_Y#C&HM H "f _ dȰ0|M /\ kUsaxG/ 1/ B ڲTiH[VfM'^-*+!ǒl֒f(z4 dV`~Z^$3ܨL&ax6&Ƅ7z઄_ZmbZn CYucCH(tʐ֍Ajޣ1u 3[Rb`e!Fx3GOogvմk+ZZvm! .Zm\EAHY˾072\S'T$ @)T(R@23vqvYr7P) * %eWξthuVO֏οնc\ 5x,APͥDKbcuSzz4]<j0qXw "9Q4cCo!s`~2WA "_:oXQ^Vr*Qu!* XCIR2֕tnY_C. 
ydY*0O7'%( rG}d /S;?C#,"Ei: Y$!IB(!hFA*!P@!&awp90U(s@HīpujT&fqjPW4$L'!KbN"BAQD ~(rLd@gNfdƬz@e×: ܥ㳺VM>GHPR`:G[[9 ̐&ꃿ̷k|`YC5W/gI + f qF%Ry"_TDh?jS܀>fj@mGt˲/z5L/FNG3"xһi=w`Gn}'07pV./{b  ƹMe\|GUA_Un!cJh9m."*#!S\p` !` )ILĄ4;@*6yFC6hD 2* c$VQN-"EH(c8!s&"/%xmrN b@ql!M(B* T`a Q!Khň3$(aZS}HbR\/Gӷ]wk4:d:Na&!Hz9$ 콏ͦAƝ  > Q^/g)/j‘l\:rZ@?g U `>Sހ>SA,140Qp.qϰDz~ 9[|f]a1YW!]Psf`܃oģ CFZk&\)8y3$D\&,2!VB=/#\#)NO)kŊjLb9erf@h:%gWw[H$S1^M@Y}N}Hǥ .KB Yv!U SBYWme<,@QBqfAԮst=T oO?t%<={ @SJڻǬ9`כ2+3y qw岜jQ?zl2.^Wz6M,1j\FdU6F:d9TF)\%l߽1-fv1~6#nD}n̩`hvhWK U 0^Sq'^H8䝳O-_E拁 QFRИtOrnUp;gQ/b:t#yhtkq1_8QU!E=xSA1fcJjf7ć-/l~mޫ{d`4\<^] gAtEWc.ޱ0B}s y=N> k6Xqa w*DZDX&ᾱ!9o+BlԒ[U| v)b]V`ѽ Dy0 *ǟAn'*ƂPq B<_h;!fO7NiTNQ Blu!edwZʟ%ʁq{qvɵ"x塩ܵc;"Q[UH΃A *$Ud(!C `Xu̒^Kxcֲ\6cKa(wg$L.1kj\bX2 e;/K>Κ*p싁Ee; M! @wUp;gQ/?Ane[ڗ)kNיtnUp;gQ/K7&H/.;F֢F)e(-8J*8䝳O9@/'dQpncQp.o8G sndZDP  b*\ h/ӊ1$\1%Q푍~Cb cЄ3l^#Ht^鹻/0j: '#p7i8 d2vdμ/;.cX>Z|=糗D@78G1Yρac9#v^Q91sXB{ݛ p Q8upN-hp')Oc>ױQJ^8FQ'f͛"s> Hס!:95cCha.Y0>wܐ HqԹ 5zWSZ_ZsH^i\9hAfx yռ aɇ_';1. O>4ca0cKQ65p5=8AQQ@2s5Q$>Ewmg{L-_𨘖q%y{D;Hci&Ӵ ڹӴr|} [4Ɖ*fS UjRgZR'8u%/G^[vf6OT)bɶq:^hgV%I1"Q-YBiV`(o)kŧ^/'֊{Z%cՊo%ԍ۴:QXR^"AҦPF-Ed"␂HSc)RJP&T09B\&@"Dd0C'Flb{wM|'{u6o!TŁh+#Y#mmbnܛuˇQ8¤lE8,ٟl*¨)d>gaGi|/)&{mg>/2#J( Ҽ"e;%쑄\=*HR.!&[1C+ i\]'vFUr|=]%'m+@ЦZPp.L~2{SSR@ QEwHtaĪ[MۦUծi 2{KQwmei뭚M<[;sLfdE`Jr2[oԃzCTy2E44@㠻x0dB{kq<\~,Rp0XvjfaI)$օn5`q cr6\Adiҏ6 `,=;$*SN"ky0 Jb&յNlER؎ؖ*ƶ>{$ug$B(jNƷ^;͎W^y\)?2i*T5F12W`G'G(pG6J3D9X"Z Y+s/_Xk]vhyi+Pg^(n! ҤEɋCtYR[.SiqG)zYHA5?4Jڊ疸,ˁJ̩N0/0M pjm ań˞AX9ml^6(%>FU"V؍,r^vafe<)BἬaۇSjhD%k񲤩 Ise+ x8/+TQ^3j+qFrG*]C/,f\X%&[w;ߠ^fdY"G '~b?Lm M? 
½3{gEC!X :GHKu"Sbx4IT'&*,M$ZZD)7ϸ7?7;W7+rWެCl3*qMzܸ}ٴ1 p)|ܖ2@UX"RI R|J$&֔SQp)I5$XXIu(CTЕ < I,7@v„`ޗMo_:Wﶗ  Q&7Hآ{.vs'Xnr]oOӿxͶ?{udC?kOdW> | UͼuE5Ƕ7|v kʇtts} oF+RiѠ7MF'ʆO0^L]x@ i*JօHK7eݛם~\OeYAJM"TQ9#m ߸3a4/w \* 가A._)Y!!Zjy1_Nq/ , gdeϒuY'o 4fxk-UŻ,==207d~'}(bgtx^Z4&6 v)ñ"xPe 1ra"M6AZu"]k [Z !ﷅvԬ4`} \y|X3pB< O?ۨ' 4_NwKvqV W\ q2;NK'fѨsH?^eօ}ʆfhGyiۋӴdnwNYc n7dg{?h,c}:fo*Zۗ%+~{y*`;ȝ&eT>Thq7I@\y̏j(MlD bH"aXĆؤ2G I0bVe(`UX:vRR^9:lBP3E7twRk%.VzVJRfZuJI}.5CJJIvݰ}R߭KM A+=c+7R\nCjJJR,2_*y'twRbigĥo%@8gTʋ2+\LttY?R;h[M}aJZRR Tک߾=8 ka6m5tU*qiH ,ձ""X`GuX"ƭ RZԢ!ph508= 6`i^fX6~7lD)$-%8&0T2qX'ٸxrB.vHLZ\#ȝF۝nb0S\j%vޑ7iKJ@ Z46"-^5ㆷV% !Lrܞ`C9%c^dٴ)RHa%GC E3̹_LX#K >JC0A9-3cױa9Z"4rn{)\Iқ{nr1Qg&w\:`zUɅ\-i CAVzJ 5B-FBvw>H^!A:WI6SpDW ,DYN!vwi`tT"gX/\/v%P"^m98gX`Fz5CVqJb-e+F 9%)&4:=EEؼ8.sXQ t>p.3ս |~8 #858 dor)7%a'wu8-zxRJJ| )0+d0 W^`Fwn޹)@` ;H撀)KI%EJq1!:J"sdM#&ȢԻ@Ն*,TCּz3xWTY<ثʙ6Vϔ:'("&*`JY+cD jS21B{ΔK@Qzz\Yf$=u(CP!FI@W*0JB21B[WQ+8MIbEy:<ըB(iy4VXJ=XyCh$ژH+emK~ƖsX)Sk*L)'8MJ*jIcbRTKI`#A]|UnT$,bțC&1:_{C0[#1ʛĎ:7c85|P9^wK5"|uiQ =pR8ys.%J O'-f$҇y{9!WLm!h4ij/³"D3c~,zF WJ Tu|WY4[kO8o&90Kfr=]{+MwO=}[4x'%MU7ى[Gq]hVD9Hюx'ϵH(>xD@l26gxq nM8^[|NJ QBN jT9aVN+ ƔF$2-|Vwn?,zC}wדBpOe*IMUNB,IK-GkeR+*% -_~~[CjOyu?QDżNRVBpZ-eK,"8VegQiݢNNts ,& U3ri\À7{샪m䒻ݸdh2=jdƍ|WRK\cEi{)ku&';-@d!/D+ٔݘx7wөF"o61U5V감nlJnKy7p[UN6x1ݴD.Xkޭ=OVp])l PsIs>leFzGx gQIj(>^]׃~|=kmνB׳;=;GΟ\_u)܎ϑ\s1m10EkdY?=T1q3Q4̳{.DPEڣɶC,ΑxGZ֏ܿ\hHkSOCen[{tj%ٺfإI+gYijFz5l)8%=Nۛ ?:Ũ [Q!47;6*dC!l$VmuqU YN9Z~Ԩ8KIx^2D.{>U]r*Z{z T GXo8ḥrd/긕+/dj)ZBX9 5C/_X-dTV=Zj̸fcn]jA ;OɈ/a6J1bOOScUT_2bknUlS9zûI;ݪtQƻn˜hͻU/감nUlJPwSPwޭLb wn3UwB^ٔRB5EfF_y# ;u:'ݺԌryaĞ5#zQxNa[b.VzV,B9i^Oq=Ǝï`L(z >}N\;z䡣ʺ];zc4#PTMJ]4\^&H } !6&U`;s"ϫ DN7崯"'?Zةe<-i:8VQT(Q@F#Ҝ'?h8Z /^ 0| K;99C!-:mhI'uۢN6]sQU V;{4#gywJE0 ۻr<}%uTQӥ2YKtiZr苭|PP`/;Q]>:|˦II%Gv";cOU)vF.rq\.>R3 6ؤmjajp]’5\XQ}ԍEZ/uwĪI= ě}-5/ܸjQ}kJjzj3J7Ǿ8 9BZw9'D\)#BX(64f}QJB|c%QϸR,:5NU+P0M-$$X'؂3k&5قy%bvυi0Κ9C &_?E:+ `wZh$e<~RՋ{p_-zgfnhCLaOVVLj#2w?n_if#7@>%= d6A< :_s!%M}Эv3i["YxL#g agd=噒o Xw;diX8?Xٶ*d?('o[ 
Y~HljCH0=l51j؎5p:>eʩ+!YX@Gc`yPW;.`hvPd9c?|uQAc1ŕ9q RtQIWU}}KY?BRrc _x;I`;a)7?unү6D7M%}>їm^ǒ4N#0CPb%hNc g;۾wl1 @R9g-G>w!<~ӈ<+ `+5FavZ~N!FF4A t5aD۹[dL`:?{?^1܅jR7Cj=N](k>Xwۃ ΰU?4+%)RD%1 " &pm jܳ|)ɝsunL\AcSKc9ճM\c%ilx*%:$* zu:xxkѶ9m?1*̈еd^^]Z wg(V" Ĥ?s}KqR[2}iE>T5|: t}STA3[% N+-s3u?: "8Id@LaDR1 q(J)k%,\QڄxV4]η \Bțq_l[=X98{' Esy:Ų=k(')\ub'ed4 w'{iww)Bםj?lRH{8 o AƹϟЕPhg Ǟ>{!U 7rD-yK;6nAS*s|vQRG9iHg:e9;I٪D09࢜Cpنfsa}:'a(P; gWa{ VAR6O<,e-ݩ~3~wVs׭+`!l89Q1Fx?G dMyy ;ε'ͺwpvC%yϋ7&ԅ]0Ω/^ +1Y//-vm6m{_U SE޸ObSVg!uQ{dW)C&U׉6&L"dF!&BbD(J(`F' |OV>K=9R|WtG(;Q)t'}yj)!)LnGa1:BIBLA2JQ &Ly"ANcKD2՞gt)3omL5Sy[ط8Rέ~Q}]aZzZ*Hhj)r|ivP-yrZuPɏf'xq6?[g,{Gu.2/Оca4sy8%1-Nylef˼ïYeҞInĂܘT/,t~7O&p ; {c`G-Y(q"-pߓLݺhd'ɳ:47>y=%Owpp_ߘ7vޘ潉_o/{Y_sY~ؗ.gOQB78_3Y:"0y5ajdwٔZ+3|oyT5,_A:o sڻ3-+d.pݺrERV[`oo6@؝t0dsOl2aQ?'p|s׽vڎ>|OԾȬ^ŒvǟzWp'$fp5OV4D,)~Kt ={^rcdKt6Q#UkC+- MD)D&Ǔpqɮ[z^.reޙɠ7;CwN|b@߬Gv;z=fj,,;n:tpF~oփQgI6O^Ozw`oM7y5Lsg 7_hf4ue! {7kiNA_>Goa?Y;NjW ?zdy,ǣu'Jwsvh:[|&ۘ1'k=RԑR(m(bYj/YX~P;R :ʯer{#Xp(pԇL՚P5.RJԍ;2 xc X9/-e2ңTY:k kk z t;V ?-v (= -ғji9M5RSQ=A@FqL@r) |D&*7G$ *֫r'Tl|JpUKus/kU*;Y+Z0^UznДVGE5<EH Xi܃;*4&#q0i*֙/kΜs (*ikFQoK(QR4'6L1' WQ\֧pG rdž ";N^>hm{OQ/PPyS{J/U~O88McV']ϵ.ὔYxݹ'{G#AwgΩɰ'v>llXihX1+܈[t=@]J^DX5h F@7:jqKR,8&D x#[Gl\sf/dqC.m$bƲ61 ueIL^"[Оp'ur&Eee QQRYhXnT$9=:rt*pܨ 4.x!T1iw1QBضv(b8/@٥'n5vm 쵓?W yW`\6嶻y{L\IoI>5N 6L;-m]氳rB;CV %˼T)#+H !:v{/vgP/6N6 ֕v xfc-3}bI$gJ7pG*Pef`_{^ o w;Y+I;>?W;p b>7ϹնNjȅ*~NzR*OB=q$0U|IDT#%G|MVBy2 !kP_E:FY2_[:> :M 6b9]h7Vo\M.Z'Q4 M΁RI1!Q(" U!J &A 0ݼ8 .{a?C &w9$koyiMT~yI?z:kXyz.:enR$IJL`\bbH2D&PiU)'XiNBTeldC%';KJ}sQfS=sQf@ )U9w$Q̖S);'ef4]d=n#/e3~ܞEh(tG\KmĻQB)#EfcSg1*O4oL'_W>9^%qs2o*p u5^OkQ#N/4^[Ƃq%x! 0zgDa4ل틅uGΌn8vLHp Yh9=ejt>P7V-oZ޸jFsu(d 4a%ZH&$f&NIhL8TRͭJܪ//fv&16}OaFiv|"3Q1$m$&Vs, 2 ciHšVnXbd"RMn"G2F!QHT!0T2Zdž0#0d!F+"D3ѩ7/um܁Ȇ@?8 ,Ľ` )8aCRI$sSݑs*)wI 7m3-1 H^qnw]ci <@XG187$&[`n\e*/V껾7ƇLW&ҴȞ9v OU8 Xʌ]Qƿ;D`y _4զ;V;cg 8M4fzؑ_ /tR$8y}Z@V"K$N[eKNed<IJ)~ucUX_)΍sXZm;YN4Vʜ#'DMJ$g\qѺDP-XBN6}rɓZ*j&ɓ6E2Z3i4AaM0&̾zD} P&wN.Aš_b(ʏ:]tKBg(Lhۍڞ!Jc~Fh:LjBʜGl s2j)G4Jo5&g? 
ZUrbOS"Saj;y-0rz¶\WwƹVcq*wʾ$pgSx]TG3A2 ~(@M6Ƹ?^glV\ X#Q_]x[P?@P+Z Y4;Mz]nĮtu~s]!kt_#xO$0NQ#P{DSҧ{\/vߨr|ӯn.OcUq]ο)Б8pjίOc#[pr(V#ɭ@β%&Wbs{:~AU?6_MCƥR";VQcCԤG)*@tR*[T1!d!G) V P~D{+c;g)~Oq|DTos2ѻbe^F J)1@Nj%BBlc5 ?WqbVysԴB~Si !j{oA ɍnfS椞륏i{]>cJe=vsj{*mǿMCoHm.m.o~a?M4@kZʖ1lQXXoȊ ݔ^ٞAxOb?ɞc̓YPTRjgŲ`g)L»]#&'viBnl;S5V^q҂պJ;C!<6[x[}*LIkuS 'f"EdөQ ^4!~F_r]Fqk05wMqa~l6]o͌=b7_^~(,8D$Ю$rԬ\%79?Xt+M>. 4Mger2䐃`FweDX=#Jv ʼnܴrN<%?N:uج7,%9N[zceL&~=.`13`FU2"j^S:4 eb&DF2s"֒y#$eiZoD3tGt5ٮ'nL wu;Ɓ(-oq}NN*#qyU3 fdw"NTO ksLfBy@Ҫ2d36o 2t^c``t Ầf[S"G C,"tZm3iجqmO 2whB# qЂhD%"+ΐ&M&גr1ȕ\BAL;OzXMS\l):H}Y@ZPw ''XsNf{95"3}(ٌżts/ -.: b0/ BR daN8 _'drA"H-e4FJͥ;7S(%K3֟^I?DdI@,ۓ2oVӸ5(XAPt}9 yP|VZ>lNWNr}cf/F+ . EyȰ))E"~MaS;R -Ʀ<_mVdz t1!K"IC% 9`r{Hgk SN 86[ZbsQC'sV.y7&<7L/{S(ZRGnrVc5ƶ?w?LqQf" h*P=ed\s7>4=TP8vwU<,#~{N7ӼY||cij$ b $-VgʹJָw w~_tT'(ʲ\;jwbЁ51+zZq y #ѓOA{+Ջqg+Z嬶<(f,HJx[qǺH7vip#_ Nn2w ԓ mǹ}ō2'Sʵ<${r&#ӂz D6˴P$cƋL='nKMm%NgW7åz3̻O:s |9(Bopr\rwBa'yVkRh:K (NGϚէ zIaqui`?^j!_ό~Tmu~YL2z7$2` ]fǸL9C\x(eYnl\.z(&ܬӄg.EJ)g  yEByQ}uiu5qZl#igٻFr%WM.eG;1;s9O`pʘ̼MY.RU K<>='*,M)g^pGi*HB K`9u4tR1z!" 
iLZ`F07 +luM4MMN>0W ;]r0P3r[<NG˳bI*u^ރ![t.Ot߼Ţ'( ʎj1W" #PNٱPB(LFhg`>|9y2Gh}5gPo7h^ÖwG*ޮG~HvMAW:Xٛ {vh@QK5o`v_`xYxUJg-dڟhA'Ϋ<j!ܫTl#Eu)h)6@*댗zwe$cPZR|)BGVbT ubVbd({>^u9Pn5i=rIV@fFTP`k^G ;QF>A *Cv0xZ*dB9$ F ű:ƫV TXE@r)07 F4sTCyͦ1Kkh;p ]+c %p!"Ey6yo̅hlӃxL6TOBכ>S踌R73˅Z^_~stpbg=S ez=TZGU)E%(s?*!P-(M68sp8N `z  UZ=F$vx4B)7R֠±@Ѡ|m8IetfS+O1/- HਥBX.,˨ L%9 `H-~U:Szof.q0/e0ۢr(È<*cdJVW,'#&d;tW f,hP+uXI~O׸p}޵\w9Y^tHwcYPRhyw [;/N2nfq ץ荙'*->GZ0/k.'s4LҲߣ$_FCI|hG\eY:]*IAlIo 2栣af{c$hedm j.qp],,˝\o@Wh.״a V-59}u=h$ `|XvZLm BTn|R{dueAGHz9O;8Kaɞ;͸Ffĵ?~.'oˣd8X(Ӎ/2޹Յ~s:WO9ޢ-G-vL'ysB X[ eŴ<Ct2L+Can08RAKUaoœ =䜆M_rf΢v}!r9H@4!3wjΤ[{B٬[WNn>kkWLw|k =OPrn-T8qcXmUO׮Zg B?$XW)Pvy6=\G r0Ef3P>Lg%¸p`#ҋqĠp^X>޻[a?X6elEw,g^zWt 㹋xS_sr;a,J\X;nysd}u{x}೔((+\Ύ$䍋hL|غievŠ}GrcQ[nմn]HV2Ui"cukAi&CiZɾ[F5[EtcRr_bat\ j|l|5 YW/-&i`Frj*Z?Ꚕ>g\IyDrS װ+ ((ηVK%iƤ v 7mճBRng uAAU:^L>$TEPZAbna)Uw˧ G 0wLZ$n=$6N( qUҳHà^AX۔ǂM1U+,}TAb3,.B uM}/bwa-bh7-jgmlYf^za|ԪJF(nvѰOI_/(KU `Z2wb_%6ZѾYS)/:r/܁qm%S~l$]nm1#:mĺ\Uxak֭}KϚ֭ y"J$uӏU^ºŠ}GYr=F֭}/֭ y"LbFk-=?WWgjj4 ]J2Tzg.zTMi.|ęp\.ytDC?! 
w"lCuޓ7KR9<zs, OQw}wր=X+Gn<'B;Ia~a|yQLB~4?ۀl@[wn-pK+!{5w 2̜46P"U6O -v9Q=[9dO!g6Q-7ylΠ$b2MэedH~sӼ*p?|z}R \1+!W1DXMDύщ &, t-@مNûoL]=#zkقg>Ҽj++|[ ǽ _MƼTJf熨F˻c@8Brݎc{QMՆ{w , "=͒yˁV9˔ 2HFZ,ռHeZY㋠ bԬFSi8" 3Bk5B"S*R5AP6nf6(W*ULe76XHp-"ӄ HIȂ7!$i-EoV8+s"9=?;4.DЖHPT $Lp ].7?FOפ &iDtN9g- D$(1uE To~p4NFa mFdc\ ΒV;@7o )V4j]Q:&uo?[ceD %lX&BId@<A"C钐uE/Oҷt[ciGyC.VKH8>0r`@kL~]H){h֘ЭޏAq'̩0_'ԅizQQ?蟗}٫'WC5M%y$R) cJ\egxP]/(7?IHo"pD;lDi$i,5Bg/ WЛOWk}6OPg(bR*c,qf D0]Ѯ̽++:}jF}ty^s}>aiP  `X}lW.-λY:Ms7^#ڢ,X%+L!@[ ` r ]`@ ToUs%E!W>^ s0 $JO)4}+ sɃԋԋ}.EKi>8hHdRI ce/I75׋Li&Nji{&cu<{ȩ,gHQ0%P&@S]6`OI_^̫{,*?R?38OgߞD>2&Y6UIs1uB;£gE Ō?.kR|#91av2%Xj(![l@uڨʢl 9G4 )2=C>{j]^ ,hxM/A"U$|1̅LNSIIQO!s _@)6I~~`;0TY fw\h7RUB^!TH̼ l65VUC$^OBvSl~?[d"Gs?\)M z7]4?{Ƒ_!iCaYC؈|9D?mETl!%(R ɡDej8]zaͽUTrq&¸9t;s6L::Ɖ+u>H 1Ô` >!+^u9G`?F`׶~:̨+_t $)"j)֌aac^0s2S{BJw)O{y`^ P`ʁWps ^9©/k>W )Y?Oo(VD%8{J,hZ5 _m+Qb 37)8\:MQ\@EYBXA=Z!; ;/)JJf=EZ$j`)c΀N%hMn ĸ19AȝL?z6@Y5׍7YnAKtR v``R&^[ L^apzp[w~MrPDxwp\LE߄}#`-eKDQ7$]i U QMٷDx\Iʑa`K&2.JPh$HJKL%2}b.Qg[ "%3h>ka!P1cA"T$Iφ[jCDg[zh=acD# #}Qh>fYI>j tV |N;ɮ훢 ƂmY~Q!}T±(\19 %ƥc\qvXHH:Onj Pʑ`c". 
BiG|pnT)U&%İ)N;,RU%!R {N^"A``) }W kB뻶; A;rs4IJ3Jej'5g`h55"&.FC/r fjH%,l;RUC 5sB5iT@ԈmRoGM iMY1[v/dXk@A&i,űE,q wsJ⋙&| Z;[4/[9e2]Rӥ bd" PeA~$E8Hjj (iC5VӵXM:l]&DKsK{W Zml3s3..R8!/\+66V~jSKm^hU:,[3bi U(8s3CPMMW);S)[0(&6>*~248Ņ8EzBS5QOmT*HA}, IbxRIB/HqW5m_i8O@@Tp&CՄyӾ+L[JduٖXINxj)&/DR3_3-S j:_QDkaNMJa:la+ŋ\_4 u.EAxɋ|(bX"2Ff ߩc\(BH7O"7?΋B |w|?6,8aL@9&gWW^gf:WGv>ےof9i6m lqc_ɍ%RrANO r໏G;~̽SR@r|{]T{;FY -j=~p݋_!=]TJA ]wW &=S-B*֪0>'N~1>\7沢}=I~{p$MMxvCJ%Ǯ2D4bJuXR%,!%BK)+9(.$Նwǔ#- ) qfL;)룰%Rwf*HwOP^]"+SXȒ6Q UB9[•;Hwac%atL8m71CoG_o.zI{?VzR=/0,_F#+KRO,K${gՁ9.G-@=[l4<5,%7]N eͯKN13 O7Z}~%Fjgc8 )ˤ`N¸T $5xJxn?es^^UI*ssk̖{!gH/MPdR~)\GiWyY~3 FaQdzaqr!NghS?h+rŴD⋠e L}MRCH2 3BDH;T?h5!8$'XPwh+MHri!9 :}z>pf!.%Nze{gH#GN[#y_FTHcRՈ\!y- d$ZyJ$HI 1"qTK(()l"@gфT' &)qψ7U93_!ȝ2w `L0i_H"xN0K1DbJP# QzfXX'y|jʀ]+="ԔCҞ+L)}ՙ [6 oՆA-`A.33뾄E4y*$V&a{ie^W/q:/L3b2Hߧ!{ܡCU ֨4B*k@^*뫼"LH_WI.y{{kKO?wwzi3zG8'xv\ {xRcZxVfJ&VzΨzE9 :UgT1<)Ÿ_o㭨ě D$sf %hd'2e鞿ݰ"K!7y,`>pJNOn8"(ڇY-8ӯ~p]^g(.ז@7Ɠ6xw]V3.NIݷ{AE 4䅙Lo<>S4gާ8V?]ز]P>򜘂$ݐ fF8J.|GL8m z &Z2G-8Ӕ&xŝpWߧ VI!t]m {d4Oo&,Q nBkx\s4ax 0YZ2ӧ#yV V+Dݣd`'/x{= ٚ﯂1esf]\K{.? 
g蟽b.Fll#t BOnt7Ԭi˓c`(0O>}l"w+-7%ڐc[xLRUMR/ Xw16&L\)ơ`.l<:{Ziβ;˃5I@{q&4$mz|n5s.ZPc 5-+ԍ3 և NQk Vk`kOPs` MQ7xtٛ_߆oRʥRal׽GI4Xqw 4/-PMiZ.pһ 4]cyBUzI@iw)ݏB6iϜH5k:kqZޕ9)L+EZ.dRֽ3ej7Gnh4kn[yZiQ+Et|s3[`Ʒԝ"ѵ/W%~/S_azsi/*{ƿ\=d0?յxE']d j˧9}cĭusj1FudR8zb siXg1"nH@LEjfɖMW#24|мTKUJz"[m Gãg\i^P뙡i΢ y B/ ɐcV2cvo+])&e(X"eVrF+J7I̐(mAh:abeFBchDT: NH)xPx>"an@_,agbk88݀j\lUu-l"/]fO[w,YvRסń Z:UTIKH*^ס5JY_} -H;]e7o/0M њ Qh:x<.,`&JJjY$%vX ^95۔*ohO> O;[7vI,r5Epytj]ɲaqSo6]NNSVa 6$䅋h# LwUCd5jFW;f ްC߻'hĻmP\ 6B"=؉xcv@юJ$Ռzv!9fމGV8uQPfDruPܿh`&}RqFYSm8|9/Pـ έJ]]' B}^eO.++IgV a4eKI0j{})V);&(r-DlOozx>ef!7jѵ]4*A3t<-%hq*k zhp M 㟢u0B9}Q s8Sƽ/w6l}rmʿITRPh^S CV) b`P] "T3V|Oy 幙Lc?6Yu.Hlu=ޞfJJR.R.무ZA+=G+V+ýIX>?lR-+=k+ʸ8&ЮJFkd 8 M6PAr)%DvZhu?&զJI̭:КjIA_s[=Jxs9t?6L^mmu9zʞЉOJr'M:+Db;I5WMdy&bYiCuIU/VzV*+W|Z'aro/TU+~y91cST /gmY)gEߒIX>?lR \r3R[i |ZVE]LQD'+Ruʰ榗u {Xq(CU}jо-*i!Nݽ;+S*&]U@&UG_ω};e1 ]7_֊E+0}Cl3W1'VI_ڱ둺2VĴsPhvHƝP_^< M e9fHzSYuuSv_+YGX|֋xi"M_s}TA7 =?Ωv4{NjC6o۠VPJ[AnS?&Y6 V[j].ZRXv88NlU^yZ7kԮt%P}O=@9H/ Cuqq^iJ5pˬuqP ɀEgY#;i|+h/6a*}YKkJsa!s!ɚhc+bY:Z*++eF37 *8{YNῃc9}&:1ÁM%59G.>%i^Bno DBxoEpe]%g62"-wٕg Lzjb!*6C!U+@|*2yGO;[BkmH4yYH>㒖*:zVq#mc* Yz ^i@jVO!xݩɚj87J+@2HL5Q#^u!Tӏn2p0jNhU3QeLHx#Aev#Hvgδ!VT3ze40+ͭ )i<ŏ^Hw;2ֆj- oڴj( evKsLԑ%/^ -(Ju15Qw6I}RӚm^SNA ؠQ0"e.e- c1I%Sو t21gj[1RʄEAw&VdN02Y*9rS D$$6Qajĥyo,HUgRM'aRYiC5PvCzVjӪ:Q/M Vv{ѻ8ea#NS>½j`R;FHTu⢔!9`nh-ivGU2=x~g'({G-u}oX ]ZUEOժyT,՟z1Ѯ:/孟v1HNULc >x[QnhpqKs~KK/nZGn"k/aL!2V*9'>)uij!Rj,>%4?/(?J(!udF7OK?)+&'q0F+ V *LQ9bCcf,v ɴ3:d֞WCP8OYjE]6@i~wׄVh^ƒSCnTY kTY3FݘW\Hsd(S*f*Lϵ9u^q\XIȩM x%#pq"'2/w Sn->" +nYHCh</?KO)j6cZ>px>kx--ZH64 >F{vL۠NP7]]'^wR~b&j*<Վ B 62crZAfUrŹ L{bW;wT(1"LVfu0EQkcgL$ Ȍ.Quօԭ#3jdv2BftШ<Ot4wԨ"3z'}kC&GsbAa_Phb6<n58!@9'e08zbLl *Mx9q2ΣF2дf"o82w>Pǎw0<O O59 bok ~p~2VVq KC V|h|LULS1.|Lcnoo%)w|MDT?=}̌FEMHϿV)V?!O *~U,XUŸF7KƬ&udYKT @SJQی_ REҞ&~Oos >뻾/zda_!8ޗ Yuռ*M -g(mAydRdb1'\0E 2u1!TI\y9$8GȻJnoyOpyv{\;r8۷ i̮~CoUUҪf "o]빐vz.%t*&(P@jU5BתGjtR=2j]wI |}9--*E3FH"X*P>ɷoO脒gDq. 
L&&[LV,';7M ;Y>)pF:6 8*EYP"ڭĤ ng :߽[wϝfbhr\= Io1XY'7-~|J֛L'*D"Fn"*$ NdÔ~\d49'bԗ͍pjf>/I>Vc_ Ɯt3Lj̔^RHDpUӦ 3N` OfF{qԸ:ylh]4 .b͂>3Cڅz:ʧ 9&Ez]Zj[E1\,,p"3*A/+))QXZ>T ӧOW)n8cGCQDqWVɎcЁ-)=ܺ/ecЀ |/8NjK0(p61 dLͨ&}Q17J"KI(Us+j#Q"2,a%ȒλWFN\/&,1eu";Ɗw%9k"H13bQ{J̟3v`(;Uw\ ZGe%%!* R81ļU'tCkf,ԡdPxx5ptqyb%UBwWuZPBV`7X.se[`i `붷⫏Z$sSUULGT2gwhAӺZُQLdzh 6jɴ2 w=gQ`wסb`|5E\%'`{oh2Kqy :'C°(m ?~U(HZ2@(@D{*DsZJ?VN66uڇ0]^faVF+YΐHIC!sN!88$4.p(>Älۓ惁qVbк= k|kd$QDp\9&30HF挥ƆGH XR$cRfO g |8ej{&K1ns>7^tm,,$J1E:1 ֈX2q1@id9z0Mcm+dLká +hN48x"3bCN%|(mʶQS1[^%6Y4R$I37PBD)!YN|:='hQ0nt c@!(RJGĻXPO u(zm- ^ZnP(PTŃX p0xZcMZp7a)w>傶}Pt [j<3,uy!9C)q8xDem;{Ngڟep(A,hd &T%/ i]Ui؎52S8wDeN+1fǠ`-"Q!km\:эQ5\6$?k#@6X-5\=Xv2L q<3a[#@VcChH?>c!#VLTcZT?X?r5,)\}T|9eXV ʛd" H "8yz>zt9`Nm;QgyqwܧFJg7mIKЌH2q8ByYȑy?k)v(3Xfn1vhjXNIxӉ(9wDusJO@\Mѳ%VEmf4 '.(Mĵ [I'p2@JHiyɚM57.S.ttS1FiH:Ww!YWcn4%\ǣPRB0ԾȠ ZW\n//* K0L('x V+ 86,HVni[رJa_vz?ΰhs}q^IX(KHJ)q%" D0Aa:sz<'o1㕪. V xjo04i<ZCodƃQ6_"!O^+@!1," .Fu,damui&V$*`<]ONѐ90Gi 𒃡G5U.f-ȕ 8F}̉+%Pݡ>\ beeYL֑Erxãj βb+*>_'҃H";,F"VRSkM-Ӭ3(6D,yxy]*]C䌓c 7yOl5cNDv֖uq4W#W6^e i0 p3,W Ad*y?^C)!LFЍ(l?&I:=!]W%q7Hx :pŔa^ɾ5gL}Ay唤FOD' Xr֯`KL{RUwj+"ɋx``XY !xT$qitg:9ķ ,3T@3NgH$ap췣RhXTS&iy 6g͆l{gm4:I\ t+gC"'g,Y'T9z*w&˅.)", %8b{6.b8ѶX/T-k[[bLtq[L TL7Kǃm@՝F}*Bģ]d?^TXI \pОEN0=gB! 닪`XCQrI|;{,OMk6ǮqΘgYmueA0XWŰ, @ pMT1INdzGֽ.mwAt??'99]: 'gޚw6fwxB2lƒQmIvZ[3kFTgy|oهIo:yN:L{N;Zm|7] fr5N)Yǎrc=a:߽5}ɘv,g|G޿??M [Ȣ+B;1%a^0? %BɮPm^ wXmN"gl rix.:'EG%\*܈ Ѽ[ϋޛA,Q7c;A/z8;$`ER//"_3{Л7H?7pJ[ĕy2ˑ_'/Ydկ(n78J(9d$.>X +d5R ꠣMJ̴=TjэUJU 2T8/>*<&ttj~\O /آUjP bw\S  !^ye׽{yB_ooqÃ{~:[nzxEG#_p׻"fRݽ˗S"Ǖdq~l<蚎A4R^7-~ n\ĜhwJeEpůmXr8l/C-A.A4$85𰭆wP>}:9< O2vK}]T,I!uvFavI)Sf!z#=Gׯ7S>-+P,羛d}YQ_ҷߜǢS񧽷`'-Gn~9\y,]s 8JSkf(_wضdCaS[$޵,"%H&~1`A9YD_-˒ =CJRÙH6+Þ/rf qqgۛXNl9~_¯~m)F_ƫӽ G$nskj$lycqccAb|Oնlc56Nc^OnmqP|/}wn伫%2ߑ֒@~ǑxhgTf/$|twq;~6|"k PM(u<<Nz6 ^d0\{ I>!LwYLa2m M@^bMJ{礓&۵EC! 
E8_BߪI yuF`](TUS*IPY† w4:~Z%2MwW#*‚9 rn.Y^\p[S[Q@Q/v\/9P9R ]ɤ&#X< 6Z#cwȠ z3 >!'N~H{M6rJحUIn|u;ƒEN8s~Q!۽eGBZ$ :f%t_a㭀gv 4\S~7rê{I;6q/)]ss.5B~ך+/y$0FHB 0-CDB(+enyr^%v{Bżix>z%VQ5CAB9V2X{꣐0) az-@jի۱\UW\8üwv3H1`%8Wzs\glv0̻$hx!|}F.GD͙&xe8B\Fz]6P@If!ʣoWB9rFFZiSQ fXe=cA{;m^F+`k~^RTa<-Sԓ߈8qֱrjAq a>+<&JuUyokkz`Ha"o~nw}KtB(_=Ez3 Bֹy|B#Oߦ f} _{sŻISSŒj'()DF S q#)rV,9s*Z9՜L 6LaD0@ҥF֚=}prH\M K"wxUO^?lD߿OEZV'Tqy4T:Is!RhOZ9_%ײ_s[9͛tq a<[3b'~)cd۫PBmzKO K`+mP1eVݤD2lX|'+aČ3#fT1uL6JC4GO"D&GM(:9 +Zqe}flRv3g0.1MÒ5Exw21%YT <,% ӘZ^+Y݆s3`$1.mi*kr+"͕-R"%l,]P6zT$JՊc FX3K9r\ Lyr*`mRFo 1\$a6b$.A8mf\4[>_`@"c;yjO柫Jrbn?/?ۿ`:BD[^q>՟]lR6\rlK}Me|VU6G%Wb(,rPbsA mf: B 1ށk .dTaLS)L2Y&2@:^V(FZҪvwkF"-ߝ/!q-fzی:bz:_BщLJxDKQc7NA2YgNE=+ڈQVK@`W| G9XɪkE b`56!8lMLL95z0Kjwȓ oq@5 i{j@ɋy̛db%ޭ N2f@& O%]r3Mb~#' d+_E0彉W Q]k`!pLb&K*Β R&E7:aroQI0={SSgs7ӖCjz ƓG-}8 1ZRcRWP7Z&hBT .U8}DJcdb$ႜtYN^CFe\ޜ.: ]v|Ys|aD#)o_eS70gwy))ɇM8ȷ.T\OɎ'xq{?qr>8Z=v2Nj0X:0I@m~O*ۄm񩍐^= +xlai͟><`qV=ZUV=F\$ra- Ěʭ-/;KO@K|8juA o^C҂5O^%O`/Ա4j4^-un ůr$(kcV5'j%ZCv db(r{\<}p-nFGM4j߆}ʲفWp~7 UfBaP‘Iá)NS-j.Iםn7`o! ea eh~%ZMar[6 pCjɿu~6VWRҺvz儙qeQ6 ^z˓^)c%K^_,w/6Cq:azt/9hn^8o{? 
Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[532815127]: [13.789889189s] [13.789889189s] END
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.880516 4675 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881351 4675 trace.go:236] Trace[1114930983]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 06:53:23.093) (total time: 13787ms):
Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1114930983]: ---"Objects listed" error: 13787ms (06:53:36.881)
Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1114930983]: [13.787954496s] [13.787954496s] END
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881384 4675 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881561 4675 trace.go:236] Trace[402083612]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 06:53:23.121) (total time: 13759ms):
Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[402083612]: ---"Objects listed" error: 13759ms (06:53:36.881)
Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[402083612]: [13.759644769s] [13.759644769s] END
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.881573 4675 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.882205 4675 trace.go:236] Trace[1089945325]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Jan-2026 06:53:22.504) (total time: 14378ms):
Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1089945325]: ---"Objects listed" error: 14378ms (06:53:36.882)
Jan 24 06:53:36 crc kubenswrapper[4675]: Trace[1089945325]: [14.378137134s] [14.378137134s] END
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.882219 4675 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 24 06:53:36 crc kubenswrapper[4675]: E0124 06:53:36.883476 4675 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.883735 4675 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.895199 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:17:03.564524987 +0000 UTC
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.921870 4675 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36434->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 24 06:53:36 crc kubenswrapper[4675]: I0124 06:53:36.921923 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36434->192.168.126.11:17697: read: connection reset by peer"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.049329 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.051577 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b" exitCode=255
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.051622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b"}
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.151172 4675 scope.go:117] "RemoveContainer" containerID="e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.216834 4675 csr.go:261] certificate signing request csr-7544c is approved, waiting to be issued
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.301339 4675 csr.go:257] certificate signing request csr-7544c is issued
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.496710 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.500516 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.525124 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.880750 4675 apiserver.go:52] "Watching apiserver"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.889874 4675 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.890417 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-zbs9f","openshift-multus/multus-zx9ns","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.890846 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891490 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891541 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 24 06:53:37 crc kubenswrapper[4675]: E0124 06:53:37.891552 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891595 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891642 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 06:53:37 crc kubenswrapper[4675]: E0124 06:53:37.891662 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891714 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zbs9f"
Jan 24 06:53:37 crc kubenswrapper[4675]: E0124 06:53:37.891741 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.891940 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.895781 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:48:10.29631895 +0000 UTC
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896206 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896325 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896323 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896479 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.896630 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897863 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897874 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897910 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.897992 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900524 4675 reflector.go:368] Caches
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900625 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900789 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900821 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900839 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.900935 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.901047 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.907757 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.922144 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.970043 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.986079 4675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.990857 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.990903 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.990931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991197 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991804 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991878 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991934 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991933 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.991986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992011 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992264 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992482 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992538 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992559 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992902 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992935 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.992994 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993010 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993319 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993339 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993500 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993599 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993792 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993839 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.993993 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994024 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994662 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994884 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994934 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994952 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.994968 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995070 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995122 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995138 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995234 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995258 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995258 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995310 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: 
"1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995334 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995351 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995369 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995371 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995385 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995451 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995455 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995481 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995466 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995504 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995526 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995551 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995574 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995598 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995620 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995645 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995665 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995688 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995712 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995750 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 
24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995772 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995793 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995803 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995808 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995838 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995860 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995887 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995909 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995955 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.995999 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996020 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996039 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996060 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996080 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996102 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996124 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996171 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996176 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996184 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996266 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996289 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996311 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996331 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996335 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996377 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996401 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996403 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996436 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996457 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996477 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996495 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996510 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996525 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996542 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996542 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996562 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996595 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996703 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996973 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.996563 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997011 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997036 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997062 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997107 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997128 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997151 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997228 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997304 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997343 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997366 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997386 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997434 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997486 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997517 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997563 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997584 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997604 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997637 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997706 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997770 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997832 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997866 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997889 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997912 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997970 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.997996 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998013 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998030 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998055 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998072 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998090 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998107 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998124 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998170 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998188 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998205 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998231 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998247 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998268 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998291 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998318 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998341 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998362 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998383 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998403 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998437 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998460 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998482 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998502 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998666 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998689 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998706 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998746 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998794 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998844 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998899 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998921 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998972 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.998993 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999037 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999055 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999071 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999086 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999103 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999119 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999135 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999151 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999171 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999187 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999223 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999240 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999258 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999301 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999317 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999333 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999404 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999423 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999441 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999462 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999480 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999495 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999516 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999540 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999561 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999596 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999669 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999686 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999711 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999751 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999768 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999786 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999834 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999857 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 24 06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999873 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 24
06:53:37 crc kubenswrapper[4675]: I0124 06:53:37.999895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999920 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999942 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999967 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999992 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000017 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000074 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-system-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000112 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-os-release\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000144 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-cni-binary-copy\") pod \"multus-zx9ns\" (UID: 
\"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000159 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-kubelet\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-multus-certs\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000198 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000214 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-etc-kubernetes\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000229 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-conf-dir\") pod \"multus-zx9ns\" (UID: 
\"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-multus-daemon-config\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000292 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gx8\" (UniqueName: \"kubernetes.io/projected/61e129ca-c9dc-4375-b373-5eec702744bd-kube-api-access-d2gx8\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000331 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000350 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000371 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-hostroot\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000400 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000442 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-bin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000493 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-multus\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000517 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvpm\" (UniqueName: \"kubernetes.io/projected/581bfd98-ba0e-4e17-812b-088da051ba3c-kube-api-access-wxvpm\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000571 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-cnibin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-socket-dir-parent\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000638 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-netns\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997037 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997140 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997227 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997243 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997258 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997352 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997446 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997666 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997683 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.997898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998079 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998236 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998261 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998456 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998465 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998631 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998834 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.998869 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999004 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999105 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999110 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999266 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999434 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999573 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999579 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999736 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999826 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:37.999924 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000047 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000159 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000256 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000369 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000419 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000586 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005043 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005287 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005454 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.005611 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006242 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006414 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006547 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006760 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.006853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007010 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007261 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007376 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007529 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007642 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.007897 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008007 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008116 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008306 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.008419 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009240 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009584 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009746 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.009832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010009 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010060 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010283 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010302 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010391 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010429 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010523 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010631 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010780 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010962 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.010972 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011126 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011292 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011516 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011677 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011888 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.011932 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012120 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012271 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012338 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012558 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.012586 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013122 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013337 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.000661 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-k8s-cni-cncf-io\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013635 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014188 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014382 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014578 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.014898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.017732 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.017888 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.013466 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018185 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018226 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018252 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/581bfd98-ba0e-4e17-812b-088da051ba3c-hosts-file\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.018296 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.019940 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020023 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020048 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020083 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020130 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020328 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020491 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020625 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.020797 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021073 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021226 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021351 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021407 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021477 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021574 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021702 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.021844 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022067 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022297 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022435 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022561 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.022969 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.036439 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.036914 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.037370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.038198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.041904 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042769 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.042976 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043014 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043041 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043179 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.043364 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.044684 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.046496 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.037580 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.046939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047161 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047222 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047469 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047485 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047612 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.047751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.048498 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.048879 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049021 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049044 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049077 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049249 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049392 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049449 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049582 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049811 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.049826 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.050527 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.050581 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.55056764 +0000 UTC m=+19.846672863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051075 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051748 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.055840 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056099 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056123 4675 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056140 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056152 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056163 4675 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056172 4675 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056181 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056191 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056203 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056217 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056231 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056243 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056269 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056283 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056296 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056309 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056323 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056336 4675 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056349 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056362 4675 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056374 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056386 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056399 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056411 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056423 4675 reconciler_common.go:293] "Volume detached for volume
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056436 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056448 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.056449 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056461 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.056551 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.556507312 +0000 UTC m=+19.852612605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056685 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056699 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056728 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056740 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056830 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056843 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056856 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056868 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056879 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056890 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056901 4675 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056912 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056923 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056934 4675 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056944 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056955 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056966 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056977 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.056990 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057003 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057015 4675 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057024 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057035 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057048 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057058 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057069 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057079 4675 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057089 4675 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc 
kubenswrapper[4675]: I0124 06:53:38.057099 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057110 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057121 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057132 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057141 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057151 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057161 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057171 4675 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057182 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057193 4675 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057205 4675 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057216 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057227 4675 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057237 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057246 4675 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057257 4675 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057267 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057276 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057287 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057296 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057307 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057316 4675 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057326 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057335 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057344 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057353 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057363 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057374 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057384 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057394 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057405 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057415 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057427 4675 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057437 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057449 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.057472 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.557454282 +0000 UTC m=+19.853559505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057482 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057494 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057504 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057515 4675 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057525 4675 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057535 4675 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057545 4675 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057555 4675 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057565 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057575 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.057585 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061090 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node 
\"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061113 4675 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061123 4675 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061134 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061150 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061159 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061209 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061218 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061239 4675 
reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061248 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061257 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061269 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061278 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061368 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061376 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061388 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061396 4675 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061404 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061412 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061422 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061430 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.061523 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.051422 4675 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.068639 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.069319 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.069562 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.069594 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.069608 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.070984 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.071189 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.571156126 +0000 UTC m=+19.867261349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.073677 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075156 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075178 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075196 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.075257 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:38.575235127 +0000 UTC m=+19.871340350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.079752 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.082071 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.084802 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.086300 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.092946 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c"}
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.092999 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.093383 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.095990 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.101117 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.107094 4675 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.107835 4675 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.122586 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.135299 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.148400 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.160309 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161682 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nqn5c"]
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161811 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-multus\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxvpm\" (UniqueName: \"kubernetes.io/projected/581bfd98-ba0e-4e17-812b-088da051ba3c-kube-api-access-wxvpm\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161859 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-cnibin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161875 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-socket-dir-parent\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161891 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-k8s-cni-cncf-io\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161905 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-netns\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161927 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161942 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/581bfd98-ba0e-4e17-812b-088da051ba3c-hosts-file\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-cni-binary-copy\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161970 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-kubelet\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.161986 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-multus-certs\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162004 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162014 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-system-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-multus\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162018 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-system-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162282 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162299 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-os-release\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162314 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-etc-kubernetes\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162350 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-conf-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162365 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-multus-daemon-config\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2gx8\" (UniqueName: \"kubernetes.io/projected/61e129ca-c9dc-4375-b373-5eec702744bd-kube-api-access-d2gx8\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162407 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-cnibin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-hostroot\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162449 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-bin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162500 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162511 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-socket-dir-parent\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162514 4675 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162538 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162547 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162548 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-cni-bin\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162556 4675 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162564 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162579 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162593 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162598 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-netns\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 
06:53:38.162606 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162619 4675 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162629 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-k8s-cni-cncf-io\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.162618 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163152 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-cni-binary-copy\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/581bfd98-ba0e-4e17-812b-088da051ba3c-hosts-file\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-hostroot\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163331 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-run-multus-certs\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-host-var-lib-kubelet\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") 
" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-os-release\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163527 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-cni-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163547 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61e129ca-c9dc-4375-b373-5eec702744bd-multus-daemon-config\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163552 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-etc-kubernetes\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61e129ca-c9dc-4375-b373-5eec702744bd-multus-conf-dir\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163603 4675 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163615 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163624 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163633 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163641 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163660 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163728 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163738 4675 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" 
DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163747 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163756 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163764 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163773 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163783 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163791 4675 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163800 4675 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163808 4675 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163817 4675 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163825 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163834 4675 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163843 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163852 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163860 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163869 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163877 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163884 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163892 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163901 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163924 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163933 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163941 4675 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc 
kubenswrapper[4675]: I0124 06:53:38.163949 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163958 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163966 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163974 4675 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163981 4675 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163989 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.163997 4675 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164006 4675 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164015 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164041 4675 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164059 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164068 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164077 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164085 4675 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164094 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164102 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164111 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164119 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164128 4675 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164136 4675 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164154 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164163 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164171 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164180 4675 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164188 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164197 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164205 4675 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.164213 4675 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.165080 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.165322 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.165504 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.166006 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.166605 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.166818 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.172566 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-797q5"] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.172980 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.173076 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.172991 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.173154 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.173979 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178526 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178642 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178713 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.178930 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.179257 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.185625 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.185627 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.189890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2gx8\" (UniqueName: \"kubernetes.io/projected/61e129ca-c9dc-4375-b373-5eec702744bd-kube-api-access-d2gx8\") pod \"multus-zx9ns\" (UID: \"61e129ca-c9dc-4375-b373-5eec702744bd\") " pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 
06:53:38.191516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxvpm\" (UniqueName: \"kubernetes.io/projected/581bfd98-ba0e-4e17-812b-088da051ba3c-kube-api-access-wxvpm\") pod \"node-resolver-zbs9f\" (UID: \"581bfd98-ba0e-4e17-812b-088da051ba3c\") " pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.196101 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.204454 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.215996 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.225640 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.237393 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zbs9f" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.243867 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zx9ns" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.235605 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-man
ager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268023 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268061 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 
06:53:38.268078 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-rootfs\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268092 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268119 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc 
kubenswrapper[4675]: I0124 06:53:38.268147 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268161 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268176 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268191 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268217 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268237 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-os-release\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268259 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-proxy-tls\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268273 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268290 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod 
\"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-cnibin\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx56z\" (UniqueName: \"kubernetes.io/projected/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-kube-api-access-lx56z\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268354 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268368 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-binary-copy\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268383 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-tuning-conf-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268398 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268411 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268440 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268453 
4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268469 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-system-cni-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268500 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsws\" (UniqueName: \"kubernetes.io/projected/562cfea2-dd3d-4729-8577-10f3a20ee031-kube-api-access-vvsws\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268528 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.268545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.275641 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.306340 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-24 06:48:37 +0000 UTC, rotation deadline is 2026-10-15 07:46:54.852605769 +0000 UTC Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.306390 4675 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6336h53m16.546217601s for next certificate rotation Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.307155 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.332772 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.357980 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-proxy-tls\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370131 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370150 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370183 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-cnibin\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370241 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx56z\" (UniqueName: 
\"kubernetes.io/projected/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-kube-api-access-lx56z\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370260 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-binary-copy\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-tuning-conf-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370330 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370349 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-system-cni-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370428 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsws\" (UniqueName: \"kubernetes.io/projected/562cfea2-dd3d-4729-8577-10f3a20ee031-kube-api-access-vvsws\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370452 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370485 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370513 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370529 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370552 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370575 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-rootfs\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370627 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370655 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370677 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370700 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-os-release\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.370812 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-os-release\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371116 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371148 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371169 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371189 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-cnibin\") pod 
\"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371436 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.371911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-tuning-conf-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372033 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-binary-copy\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372082 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372523 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/562cfea2-dd3d-4729-8577-10f3a20ee031-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372555 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562cfea2-dd3d-4729-8577-10f3a20ee031-system-cni-dir\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372738 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372886 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.372918 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373030 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373081 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373115 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373478 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373518 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-rootfs\") pod \"machine-config-daemon-nqn5c\" (UID: 
\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373565 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373611 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.373635 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" 
Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.375038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.375200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.377086 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-proxy-tls\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.393176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"ovnkube-node-vsnzs\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.401010 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.401130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsws\" (UniqueName: \"kubernetes.io/projected/562cfea2-dd3d-4729-8577-10f3a20ee031-kube-api-access-vvsws\") pod \"multus-additional-cni-plugins-797q5\" (UID: \"562cfea2-dd3d-4729-8577-10f3a20ee031\") " pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.411327 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx56z\" (UniqueName: \"kubernetes.io/projected/94e792a6-d8c0-45f7-b7b0-08616d1a9dd5-kube-api-access-lx56z\") pod \"machine-config-daemon-nqn5c\" (UID: \"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\") " pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.421012 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.450274 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.476374 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.483439 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.490641 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-797q5" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.538688 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.558119 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.581773 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.581846 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.581869 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.581919 4675 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.581941 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.581916274 +0000 UTC m=+20.878021497 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.581966 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.581959414 +0000 UTC m=+20.878064637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582000 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582012 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582021 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.582034 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582048 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-24 06:53:39.582035526 +0000 UTC m=+20.878140749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.582090 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582307 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582326 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582336 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582388 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.582379433 +0000 UTC m=+20.878484656 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582436 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.582481 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:39.582472285 +0000 UTC m=+20.878577508 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.592121 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a02
72f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.629605 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.645195 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.662752 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.816491 4675 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817155 4675 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817220 4675 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817246 4675 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817270 4675 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817304 4675 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc 
kubenswrapper[4675]: W0124 06:53:38.817330 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817541 4675 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817681 4675 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817834 4675 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817964 4675 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818092 4675 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc 
kubenswrapper[4675]: W0124 06:53:38.818205 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818240 4675 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818264 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818408 4675 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818533 4675 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817733 4675 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items 
received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817968 4675 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818853 4675 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818129 4675 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: E0124 06:53:38.818018 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.129.56.68:37814->38.129.56.68:6443: use of closed network connection" event="&Event{ObjectMeta:{machine-config-daemon-nqn5c.188d983d909cb6a6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-nqn5c,UID:94e792a6-d8c0-45f7-b7b0-08616d1a9dd5,APIVersion:v1,ResourceVersion:26547,FieldPath:spec.containers{kube-rbac-proxy},},Reason:Created,Message:Created container kube-rbac-proxy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 06:53:38.813089446 +0000 UTC 
m=+20.109194669,LastTimestamp:2026-01-24 06:53:38.813089446 +0000 UTC m=+20.109194669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817755 4675 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.817772 4675 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818148 4675 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818163 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818177 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818191 4675 
reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818818 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.818834 4675 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.819396 4675 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.819489 4675 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: W0124 06:53:38.819445 4675 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.896367 4675 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:09:13.150264279 +0000 UTC Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.946904 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.947584 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.949092 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.950003 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.951213 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.951832 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.952428 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.953375 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.954051 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.954997 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.955528 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.956278 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.956818 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.957299 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.958192 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.959182 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.961104 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.962051 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.962478 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.964990 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.965610 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.966126 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.967178 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.967647 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.969675 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.970147 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.971187 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.971936 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.972844 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.973366 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.974421 4675 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.974941 4675 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.975035 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.975695 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.977186 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.977675 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.978082 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.979588 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.980679 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.981355 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.982384 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: 
I0124 06:53:38.983103 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.984074 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.984670 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.985743 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.986777 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.987312 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.988296 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.988835 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: 
I0124 06:53:38.990453 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.990934 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.991412 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.992302 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.993022 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.994070 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.994249 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 24 06:53:38 crc kubenswrapper[4675]: I0124 06:53:38.994830 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.005847 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.018263 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.032073 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.045892 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.063752 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.075556 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.086514 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.095992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.096089 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.096099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cc87a183d9b06b330deaaedc509c7010026936da66d58f04da73f96a46a370ae"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.097504 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.097532 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"10db14f503b4ce765839227d5b3f9b598a73814dbbab4916498b3f3b230881a1"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.098863 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353" exitCode=0 Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.098892 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.098958 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerStarted","Data":"282291569b64ef33054937e21b774e26ae0666154dc0cc7efddcd09c997cddb9"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.100320 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.100366 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.100376 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"a7c0234af696ea18ae15eb8998233c8dcb973935db2a39b3ba42ed3aed5468bb"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.101608 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691" exitCode=0 Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.101663 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.101679 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"62a8cadede7a21145e681044886ca9386d55c6d70c06dc737ae9eedf6acff8c9"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.103072 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.103098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" 
event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"291b9fb73cddf89d9377a7bcbf1dfa6efc6352bc4583ab62951ee27d655d6b90"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.105084 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zbs9f" event={"ID":"581bfd98-ba0e-4e17-812b-088da051ba3c","Type":"ContainerStarted","Data":"2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.105128 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zbs9f" event={"ID":"581bfd98-ba0e-4e17-812b-088da051ba3c","Type":"ContainerStarted","Data":"e8fe77af85415ab9b3e86b06b0a7af3731b532aa2d9a6e4c0727fde92796537a"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.105856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"059c896bd8e3e3267509e8f0bdd993f6030629cfeccd00cfe994bf700cf2fe59"} Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.110553 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.125326 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.153197 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.168709 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.187270 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.207883 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.234269 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.251258 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.268861 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.282480 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.305446 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.358104 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.382003 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.433326 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.454064 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.474046 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.593807 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594081 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.59404717 +0000 UTC m=+22.890152423 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.594336 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.594470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594496 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594670 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594691 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594767 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.594752865 +0000 UTC m=+22.890858178 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594541 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594803 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594823 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-24 06:53:41.594811936 +0000 UTC m=+22.890917289 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.594868 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.594852057 +0000 UTC m=+22.890957280 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.594628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.595205 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595401 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595439 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595452 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.595513 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:41.595493702 +0000 UTC m=+22.891598925 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.713216 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.723915 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.736569 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.742236 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.747101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.752601 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.760333 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.831940 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.833476 4675 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.834740 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.896595 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:36:36.172930099 +0000 UTC Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.911962 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.912038 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.941583 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.941674 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.941648 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.941859 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.942020 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:39 crc kubenswrapper[4675]: E0124 06:53:39.942147 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.973018 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 06:53:39 crc kubenswrapper[4675]: I0124 06:53:39.998091 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.003969 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.059104 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.076243 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.083629 4675 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.085312 4675 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.094085 4675 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.094254 4675 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095052 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.095333 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.097260 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111388 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111568 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111642 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.111782 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.113754 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827" exitCode=0 Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.113854 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827"} Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.114789 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121198 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.121452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.130871 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.136102 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140693 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.140756 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.142134 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.147861 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.156817 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.161997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.162056 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.167813 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.169213 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.178173 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182125 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.182142 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.188658 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.191606 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7rtdz"] Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.191917 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.193707 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.193932 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.194163 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.195349 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.203350 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: E0124 06:53:40.203477 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.208212 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc 
kubenswrapper[4675]: I0124 06:53:40.212944 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.212955 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.227347 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.240945 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.244138 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.244326 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.251387 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.262252 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.274060 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.288819 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.289072 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" 
Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.295258 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.301758 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac11
7eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.303537 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhd5\" (UniqueName: \"kubernetes.io/projected/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-kube-api-access-2hhd5\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.303603 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-host\") pod 
\"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.303766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-serviceca\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.311847 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.314324 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315180 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 
06:53:40.315215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.315237 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.326122 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4
c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.349939 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.362188 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.376940 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.400497 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405045 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-host\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-serviceca\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405193 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-host\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.405989 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-serviceca\") pod \"node-ca-7rtdz\" (UID: 
\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.406029 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hhd5\" (UniqueName: \"kubernetes.io/projected/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-kube-api-access-2hhd5\") pod \"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417625 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.417635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.418457 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.418586 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.435241 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.435653 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hhd5\" (UniqueName: \"kubernetes.io/projected/ac8e7205-a99a-4174-bd7c-5ddaa11f9916-kube-api-access-2hhd5\") pod 
\"node-ca-7rtdz\" (UID: \"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\") " pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.473890 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.485227 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.497351 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.506954 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.513114 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7rtdz" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.520911 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.521950 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.535127 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.571442 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.612073 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623888 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.623896 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.651953 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.691344 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:40Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726071 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726080 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.726101 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828288 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828322 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.828375 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.896853 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:11:22.728544044 +0000 UTC Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931131 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931206 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:40 crc kubenswrapper[4675]: I0124 06:53:40.931216 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:40Z","lastTransitionTime":"2026-01-24T06:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.034220 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.123084 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9" exitCode=0 Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.123081 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.125753 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rtdz" event={"ID":"ac8e7205-a99a-4174-bd7c-5ddaa11f9916","Type":"ContainerStarted","Data":"7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.125818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rtdz" event={"ID":"ac8e7205-a99a-4174-bd7c-5ddaa11f9916","Type":"ContainerStarted","Data":"f9c1a2d884f4a8d0dc573c5639915dc4adb12c2b635b2402a8ed4aadbe8e328e"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.126879 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.130460 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136550 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.136631 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.138234 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.154053 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.164965 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.186126 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.199063 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.212192 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.226349 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.237621 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc 
kubenswrapper[4675]: I0124 06:53:41.239600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.239612 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.257042 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2
d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.267478 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.280518 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.291699 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.302367 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.314429 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.329335 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 
06:53:41.338432 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.341837 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.370413 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.409567 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.444561 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.452245 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.490151 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.497227 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.511367 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.533118 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546293 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546335 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546365 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.546378 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.556871 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.592127 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617410 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617495 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.617658 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617784 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617827 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617842 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617901 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.617882782 +0000 UTC m=+26.913988015 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617789 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617960 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617984 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.617798 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:41 crc 
kubenswrapper[4675]: E0124 06:53:41.617788 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618059 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618037766 +0000 UTC m=+26.914143039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618086 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618074807 +0000 UTC m=+26.914180070 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618107 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618096707 +0000 UTC m=+26.914201970 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.618165 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:45.618152858 +0000 UTC m=+26.914258111 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.630703 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649120 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649157 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649183 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.649194 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.672980 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.713225 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.751086 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.752546 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.791707 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.843164 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.853642 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.871647 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.897756 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:34:38.660105757 +0000 UTC Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.909581 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.942279 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.942339 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.942441 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.942497 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.942616 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:41 crc kubenswrapper[4675]: E0124 06:53:41.942706 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956143 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956159 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.956171 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:41Z","lastTransitionTime":"2026-01-24T06:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.957145 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:41 crc kubenswrapper[4675]: I0124 06:53:41.989652 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:41Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.031502 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058337 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058361 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.058370 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.070357 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.111698 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.135570 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345" exitCode=0 Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.135666 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" 
event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.154753 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.159984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.160069 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.195248 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.232121 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.264181 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.275827 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.310919 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.350700 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366851 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366863 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366880 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.366892 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.393140 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.434070 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468598 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.468640 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.494861 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.564511 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.570371 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.581022 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.594016 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.635002 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.671088 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672287 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.672688 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.710751 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.750432 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778376 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.778476 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.797278 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.831833 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.870477 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.879966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.880039 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.898653 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:47:17.805265116 +0000 UTC Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.912182 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.952652 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982276 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:42 crc 
kubenswrapper[4675]: I0124 06:53:42.982291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.982300 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:42Z","lastTransitionTime":"2026-01-24T06:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:42 crc kubenswrapper[4675]: I0124 06:53:42.994086 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2
d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.028669 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c
85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084554 4675 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.084567 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.141564 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86" exitCode=0 Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.141629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.147335 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.160173 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.182950 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187352 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.187382 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.198412 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.212446 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.232183 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.270980 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289160 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.289179 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.310576 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.351479 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391484 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.391493 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.396413 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.435082 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2
d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.470537 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494839 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.494869 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.515743 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.557901 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597799 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.597815 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.599009 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.636710 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:43Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.700955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.700996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.701005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc 
kubenswrapper[4675]: I0124 06:53:43.701024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.701033 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803762 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803828 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803861 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.803875 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.899542 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:54:05.932109825 +0000 UTC Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906436 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.906534 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:43Z","lastTransitionTime":"2026-01-24T06:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.942309 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.942417 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:43 crc kubenswrapper[4675]: I0124 06:53:43.942443 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:43 crc kubenswrapper[4675]: E0124 06:53:43.942540 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:43 crc kubenswrapper[4675]: E0124 06:53:43.942658 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:43 crc kubenswrapper[4675]: E0124 06:53:43.942751 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.008951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.009045 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111460 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.111499 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.157907 4675 generic.go:334] "Generic (PLEG): container finished" podID="562cfea2-dd3d-4729-8577-10f3a20ee031" containerID="ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb" exitCode=0 Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.157964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerDied","Data":"ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.174661 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.192607 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.203964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.214212 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.223143 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.236942 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.247692 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.259245 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.271092 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.283858 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.298123 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.312477 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.315701 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.329619 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.344052 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.356460 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.367922 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:44Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc 
kubenswrapper[4675]: I0124 06:53:44.417603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.417637 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.519456 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.622747 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.725805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726066 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726083 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.726100 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829448 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.829490 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.900642 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:01:39.536102549 +0000 UTC Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.931979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:44 crc kubenswrapper[4675]: I0124 06:53:44.932066 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:44Z","lastTransitionTime":"2026-01-24T06:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034521 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.034553 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136936 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.136972 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.165426 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" event={"ID":"562cfea2-dd3d-4729-8577-10f3a20ee031","Type":"ContainerStarted","Data":"719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.170403 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.170707 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.179364 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.193185 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.193668 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.214486 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.228537 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 
06:53:45.239600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239625 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.239635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.241263 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.257549 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.269145 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.282979 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.295245 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.308390 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.326668 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.339069 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341884 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341905 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.341935 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.351414 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.365462 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.377745 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.389096 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.401829 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.416062 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.426956 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.439277 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444528 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444545 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.444556 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.450920 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.468776 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711
c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.482184 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.493312 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.510097 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.523006 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.536880 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547893 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.547924 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.549974 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.569417 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-m
etrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-
controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.582293 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:45Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649891 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649910 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.649922 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655547 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655758 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.6557051 +0000 UTC m=+34.951810323 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655764 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655879 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655830 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655941 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.655966 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655910 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656025 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656040 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656041 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656050 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656066 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656058238 +0000 UTC m=+34.952163461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.655920 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656096 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656081109 +0000 UTC m=+34.952186332 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656110 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656105529 +0000 UTC m=+34.952210752 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.656152 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.656123699 +0000 UTC m=+34.952228922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.752260 4675 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855213 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.855267 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.900991 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:57:50.771972406 +0000 UTC
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.942491 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.942516 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.942547 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.942668 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.942859 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 06:53:45 crc kubenswrapper[4675]: E0124 06:53:45.942967 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958180 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:45 crc kubenswrapper[4675]: I0124 06:53:45.958246 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:45Z","lastTransitionTime":"2026-01-24T06:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060475 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.060486 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162824 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162890 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162945 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.162967 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.172970 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.174527 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.200142 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.217430 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.231320 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.245022 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.260824 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 
06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265825 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.265883 4675 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.287381 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.298596 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.315309 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.328108 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.340126 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.354830 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370360 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.370423 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.375476 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.385678 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.404625 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.418764 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.431119 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:46Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472384 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472435 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.472464 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626306 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.626875 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728917 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728934 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.728948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.830988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.831069 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.901747 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:54:31.2999394 +0000 UTC
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933215 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:46 crc kubenswrapper[4675]: I0124 06:53:46.933362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:46Z","lastTransitionTime":"2026-01-24T06:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.035989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.137955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138012 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.138065 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.175564 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240548 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.240583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343080 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.343094 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.445880 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549617 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.549630 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.653563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.654547 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758280 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758386 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.758409 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861763 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.861824 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.901870 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:07:36.110080088 +0000 UTC Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.942375 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.942446 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.942397 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:47 crc kubenswrapper[4675]: E0124 06:53:47.942639 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:47 crc kubenswrapper[4675]: E0124 06:53:47.942858 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:47 crc kubenswrapper[4675]: E0124 06:53:47.943031 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964943 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964956 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:47 crc kubenswrapper[4675]: I0124 06:53:47.964989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:47Z","lastTransitionTime":"2026-01-24T06:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.068359 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.171914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.171965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.171983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.172005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.172022 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.185354 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/0.log" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.188644 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670" exitCode=1 Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.188690 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.189583 4675 scope.go:117] "RemoveContainer" containerID="bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.210627 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.226359 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.257841 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c
9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.275698 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.276621 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.297625 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.314938 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.327049 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.343119 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.359442 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.376778 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378458 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.378497 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.390377 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:
53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.406805 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.450360 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.466029 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.480425 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481324 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.481362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.487881 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.504168 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.518694 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.533782 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.561745 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c
9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.579496 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584180 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.584234 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.596384 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.615525 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.627408 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.654433 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.671815 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.687970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.687998 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.688009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.688025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.688036 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.696305 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 
06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.715997 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.730944 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.745415 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.758313 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc 
kubenswrapper[4675]: I0124 06:53:48.790492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.790568 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.892947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.892986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.892997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.893014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.893026 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.902257 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:44:58.193951048 +0000 UTC Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.960173 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.972086 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.982536 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:48Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.994920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.994967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.994985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.995006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:48 crc kubenswrapper[4675]: I0124 06:53:48.995020 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:48Z","lastTransitionTime":"2026-01-24T06:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.010024 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c
9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.023054 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.033238 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.047509 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.060548 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.074657 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.090479 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099236 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099272 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.099287 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.103554 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.117912 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.132579 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.158917 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.172472 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.193893 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/0.log" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.197036 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.197122 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201676 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.201865 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.213117 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.225216 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.237775 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.269168 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.281809 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.294836 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.306376 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.320975 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.326865 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.331619 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.348961 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d
0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:
53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.362415 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b44
87364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.398620 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.480741 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481728 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.481815 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.498576 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.511498 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:49Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.583958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc 
kubenswrapper[4675]: I0124 06:53:49.584050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.584063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.584086 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.584099 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687562 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687649 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.687791 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790869 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.790937 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.895773 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.902900 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:33:58.182721246 +0000 UTC Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.941624 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.941696 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.941647 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:49 crc kubenswrapper[4675]: E0124 06:53:49.941847 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:49 crc kubenswrapper[4675]: E0124 06:53:49.942092 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:49 crc kubenswrapper[4675]: E0124 06:53:49.942245 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:49 crc kubenswrapper[4675]: I0124 06:53:49.998679 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:49Z","lastTransitionTime":"2026-01-24T06:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101694 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.101938 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.202358 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.203044 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/0.log" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204236 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204259 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.204274 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.206820 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" exitCode=1 Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.206875 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.206938 4675 scope.go:117] "RemoveContainer" containerID="bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.207624 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.207796 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.221911 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.246305 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.261334 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.273235 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.286368 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.299434 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307119 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 
06:53:50.307139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.307156 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.317122 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.333972 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing 
use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.366130 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.385627 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.400247 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.410834 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.417381 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.436076 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.448296 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.478955 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.513655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.513935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.514047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.514135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.514217 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.518599 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.535474 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.545512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.545834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.545977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.546076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.546154 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.560412 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.564923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.564974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.564985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.565003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.565015 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.580428 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.585140 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.597335 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.602150 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.616295 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:50Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:50 crc kubenswrapper[4675]: E0124 06:53:50.616452 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.617941 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.617981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.617992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.618010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.618020 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720508 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.720591 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823217 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.823234 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.903464 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:39:58.241168735 +0000 UTC Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926122 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:50 crc kubenswrapper[4675]: I0124 06:53:50.926153 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:50Z","lastTransitionTime":"2026-01-24T06:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.000419 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8"] Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.001240 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.003323 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.005687 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.021982 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028672 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028705 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc 
kubenswrapper[4675]: I0124 06:53:51.028712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.028748 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.038930 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.052838 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.073541 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb196d81740ec22a3c1613400cd4df2b4a38b8f9af8e1bb88c279655735d9670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:47Z\\\",\\\"message\\\":\\\"tory.go:140\\\\nI0124 06:53:47.404138 5856 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404167 5856 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404377 5856 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404576 5856 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:47.404709 5856 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405033 5856 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:47.405082 5856 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.084879 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.096464 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 
06:53:51.116324 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff3922
80022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.127960 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130894 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130903 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.130927 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.139352 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.150237 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.160617 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.172199 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.184900 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194441 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194514 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ltl\" (UniqueName: \"kubernetes.io/projected/d143943f-5bfe-4381-b997-c99ce1ccf80b-kube-api-access-84ltl\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.194596 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.202403 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.212958 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.218123 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.218623 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.219070 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.231853 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.233849 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.243695 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.254782 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.274455 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.284874 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295412 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295587 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295706 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.295842 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ltl\" (UniqueName: \"kubernetes.io/projected/d143943f-5bfe-4381-b997-c99ce1ccf80b-kube-api-access-84ltl\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.296455 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.297519 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.301165 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.302026 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d143943f-5bfe-4381-b997-c99ce1ccf80b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.311366 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.315367 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ltl\" (UniqueName: \"kubernetes.io/projected/d143943f-5bfe-4381-b997-c99ce1ccf80b-kube-api-access-84ltl\") pod \"ovnkube-control-plane-749d76644c-42gs8\" (UID: \"d143943f-5bfe-4381-b997-c99ce1ccf80b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.320302 4675 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.323821 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: W0124 06:53:51.335408 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd143943f_5bfe_4381_b997_c99ce1ccf80b.slice/crio-eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3 WatchSource:0}: Error finding container eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3: Status 404 returned error can't find the container with id eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3 Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.335508 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336199 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.336207 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.349930 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.360281 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.373047 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.392118 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.414089 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.425563 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.437590 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc 
kubenswrapper[4675]: I0124 06:53:51.440674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440696 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.440705 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.449262 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:51Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542709 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542748 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.542759 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645542 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.645598 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748889 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.748908 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.851831 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.904420 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 01:48:08.326923532 +0000 UTC Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.941845 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.941898 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.941969 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.941848 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.942059 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:51 crc kubenswrapper[4675]: E0124 06:53:51.942105 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:51 crc kubenswrapper[4675]: I0124 06:53:51.954846 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:51Z","lastTransitionTime":"2026-01-24T06:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056499 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.056510 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.117533 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8mdgj"] Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.119025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.119127 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.134320 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158746 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.158777 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.173327 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.204010 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.204105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtbqr\" (UniqueName: \"kubernetes.io/projected/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-kube-api-access-wtbqr\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.213497 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.221522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" event={"ID":"d143943f-5bfe-4381-b997-c99ce1ccf80b","Type":"ContainerStarted","Data":"63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.221818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" event={"ID":"d143943f-5bfe-4381-b997-c99ce1ccf80b","Type":"ContainerStarted","Data":"04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.221896 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" 
event={"ID":"d143943f-5bfe-4381-b997-c99ce1ccf80b","Type":"ContainerStarted","Data":"eed62cb7d58f4b2db8ee812328df3f90792e1ca39df25f44bdad5e7f510df5b3"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.226596 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.235823 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.245572 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.256274 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.260426 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.268905 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.279852 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.289276 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.301270 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.304711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtbqr\" (UniqueName: \"kubernetes.io/projected/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-kube-api-access-wtbqr\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.304783 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.305535 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.305779 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:52.805761151 +0000 UTC m=+34.101866384 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.314387 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b44
87364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.321413 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtbqr\" (UniqueName: \"kubernetes.io/projected/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-kube-api-access-wtbqr\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.333563 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.344505 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.356532 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362411 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.362481 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.369996 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:
53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.380337 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.392623 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.411977 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.430079 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.441046 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.452215 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.464967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc 
kubenswrapper[4675]: I0124 06:53:52.465008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465048 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.465016 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.477041 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.487850 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.507289 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.519742 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.529011 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.539962 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc 
kubenswrapper[4675]: I0124 06:53:52.551645 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.563991 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566766 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.566842 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.575112 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.585433 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.599940 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:52Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669146 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.669175 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.773759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.774542 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.811046 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.811188 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: E0124 06:53:52.811233 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:53.81121965 +0000 UTC m=+35.107324873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876782 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.876963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.877027 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.907744 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:13:12.390665847 +0000 UTC Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980053 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:52 crc kubenswrapper[4675]: I0124 06:53:52.980110 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:52Z","lastTransitionTime":"2026-01-24T06:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.083668 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186290 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.186317 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289356 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.289416 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.391482 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495066 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495078 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.495110 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598899 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.598954 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701850 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701906 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.701986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.702004 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.720679 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.720891 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.720967 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.720928253 +0000 UTC m=+51.017033506 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721034 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721054 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721067 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.721103 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721118 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" 
failed. No retries permitted until 2026-01-24 06:54:09.721100869 +0000 UTC m=+51.017206102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.721168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.721211 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721346 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721406 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721408 4675 configmap.go:193] Couldn't get 
configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721367 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721470 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.721456787 +0000 UTC m=+51.017562050 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721422 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721494 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.721481918 +0000 UTC m=+51.017587181 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.721562 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:09.721538249 +0000 UTC m=+51.017643542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805269 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.805298 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.822429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.822597 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.822698 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:55.82267041 +0000 UTC m=+37.118775713 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.907823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.908502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.908628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.907905 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:34:28.425675674 +0000 UTC Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.908987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.909104 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:53Z","lastTransitionTime":"2026-01-24T06:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941863 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.942189 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941898 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.942529 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941892 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.942878 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:53 crc kubenswrapper[4675]: I0124 06:53:53.941947 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:53 crc kubenswrapper[4675]: E0124 06:53:53.943223 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.011632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.011942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.012049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.012175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.012331 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115361 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.115449 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.218273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.218573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.218866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.219133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.219397 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322611 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.322698 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.424960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.425101 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528306 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.528320 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630581 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.630643 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733119 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733149 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.733179 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.835990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.836012 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.909259 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:35:59.760788087 +0000 UTC Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:54 crc kubenswrapper[4675]: I0124 06:53:54.939130 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:54Z","lastTransitionTime":"2026-01-24T06:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.041507 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144280 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.144322 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.247498 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.350839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.453411 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556362 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.556422 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.658880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659429 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.659761 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.762893 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.842035 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.843193 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.843302 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:53:59.843277705 +0000 UTC m=+41.139382948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865433 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.865509 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.910137 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:08:45.209984193 +0000 UTC Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942209 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942208 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942369 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942233 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942421 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.942212 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942461 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:55 crc kubenswrapper[4675]: E0124 06:53:55.942497 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967551 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967589 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:55 crc kubenswrapper[4675]: I0124 06:53:55.967603 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:55Z","lastTransitionTime":"2026-01-24T06:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.070942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.071094 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.173996 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.277760 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.381219 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484155 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484230 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484247 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484270 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.484286 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586763 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.586774 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.689454 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791824 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791841 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.791850 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.894583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:56Z","lastTransitionTime":"2026-01-24T06:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:56 crc kubenswrapper[4675]: I0124 06:53:56.910935 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:14:21.729817775 +0000 UTC Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.005675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006150 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006241 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.006311 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.109175 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213431 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213446 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.213477 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.318154 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421337 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.421350 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.523985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524061 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.524071 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.627859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628263 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.628951 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732784 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.732971 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.836960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.837072 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.912060 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:57:47.531241291 +0000 UTC Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941251 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941345 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:57Z","lastTransitionTime":"2026-01-24T06:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.941774 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.941964 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.942025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.942096 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.942213 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:57 crc kubenswrapper[4675]: I0124 06:53:57.942039 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.942338 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:57 crc kubenswrapper[4675]: E0124 06:53:57.942404 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.043873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.044957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.148391 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251628 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.251667 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355826 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.355835 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458293 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.458303 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.561187 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664081 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664091 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.664116 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.766943 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.767069 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869665 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.869695 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.913541 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:25:35.442483698 +0000 UTC Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.958056 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972081 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.972102 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:58Z","lastTransitionTime":"2026-01-24T06:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.973403 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:58 crc kubenswrapper[4675]: I0124 06:53:58.989135 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.000860 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:58Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.018755 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.030597 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.042848 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.066478 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d59
31e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.074630 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.080982 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.092394 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 
2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.108252 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 
2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.122501 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.136517 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.149532 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.169452 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.177106 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.184164 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.195315 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:53:59Z is after 2025-08-24T17:21:41Z" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278861 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278977 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.278987 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381207 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.381276 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.484864 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587224 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.587240 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689822 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.689838 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.792418 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.884582 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.884698 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.884755 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:54:07.884741072 +0000 UTC m=+49.180846295 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.895866 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.914536 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:48:25.375823979 +0000 UTC Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942023 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942227 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942306 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942367 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.942399 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942498 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942593 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:53:59 crc kubenswrapper[4675]: E0124 06:53:59.942794 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.998938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999015 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:53:59 crc kubenswrapper[4675]: I0124 06:53:59.999094 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:53:59Z","lastTransitionTime":"2026-01-24T06:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.101969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.101998 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.102006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.102019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.102028 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.204741 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.205248 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.307913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.307978 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.307995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.308020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.308038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.410651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411201 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.411529 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.514650 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.617409 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.617658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.617874 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.618011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.618174 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.725995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.726008 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829072 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.829179 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.915282 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:41:07.148235642 +0000 UTC Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:00 crc kubenswrapper[4675]: I0124 06:54:00.932223 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:00Z","lastTransitionTime":"2026-01-24T06:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.012310 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.028017 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032851 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.032885 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.046741 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050299 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.050375 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.063556 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067741 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067762 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.067785 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.081926 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.087441 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.101919 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:01Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.102030 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103890 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103900 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.103925 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424747 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.424830 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.531857 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634341 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.634365 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.736814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839289 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.839324 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.916339 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:28:11.535786337 +0000 UTC Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941489 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941554 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.941651 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941869 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.941878 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:01Z","lastTransitionTime":"2026-01-24T06:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:01 crc kubenswrapper[4675]: I0124 06:54:01.942066 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.942075 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.942210 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:01 crc kubenswrapper[4675]: E0124 06:54:01.942434 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.044085 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146349 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146362 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.146371 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248832 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.248841 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.350826 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.453978 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556732 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.556814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.658911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.659281 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761702 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.761814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864835 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.864842 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.917443 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:38:11.184778358 +0000 UTC Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967553 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:02 crc kubenswrapper[4675]: I0124 06:54:02.967563 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:02Z","lastTransitionTime":"2026-01-24T06:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.069968 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.070067 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.172518 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274689 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.274786 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.377428 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.479991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480054 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480071 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.480114 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582631 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.582664 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685434 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.685532 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.787688 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.889994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.890471 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.918399 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:05:24.912853878 +0000 UTC Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942077 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942207 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942077 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942362 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942445 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.942584 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:03 crc kubenswrapper[4675]: E0124 06:54:03.942792 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993552 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:03 crc kubenswrapper[4675]: I0124 06:54:03.993580 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:03Z","lastTransitionTime":"2026-01-24T06:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096071 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096120 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.096142 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.199336 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302269 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.302281 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.404989 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.405012 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507835 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.507989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.610485 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.713512 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.816344 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.918536 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:38:01.37459334 +0000 UTC Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919372 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.919418 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:04Z","lastTransitionTime":"2026-01-24T06:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:04 crc kubenswrapper[4675]: I0124 06:54:04.942604 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022691 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022766 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022783 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022813 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.022828 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.125639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.125969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.126141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.126257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.126369 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232955 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.232990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.233043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335890 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335898 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335912 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.335922 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438377 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.438402 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.442056 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.445432 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.445547 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.469003 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.492627 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.512914 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.532011 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z"
Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.540512 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.545956 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.562172 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.575997 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.591363 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.606131 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.616097 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.626893 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.637368 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643411 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.643473 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.653151 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.664100 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 
2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.676674 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 
2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.689474 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.711661 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.745688 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848567 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.848782 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.919947 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:41:19.790179828 +0000 UTC Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942334 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942406 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.942691 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.943051 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.942476 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.943259 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:05 crc kubenswrapper[4675]: E0124 06:54:05.942867 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950867 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:05 crc kubenswrapper[4675]: I0124 06:54:05.950957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:05Z","lastTransitionTime":"2026-01-24T06:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052860 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.052939 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155604 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155764 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.155782 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258380 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258421 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.258454 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362254 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.362371 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.452807 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.453783 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/1.log" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.459077 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" exitCode=1 Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.459159 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.459211 4675 scope.go:117] "RemoveContainer" containerID="8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.460666 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:06 crc kubenswrapper[4675]: E0124 06:54:06.461032 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.466163 4675 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.466534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.466946 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.467545 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.468284 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.488623 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.507864 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.528951 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.554781 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571905 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc 
kubenswrapper[4675]: I0124 06:54:06.571922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.571933 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.582015 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae
637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.589854 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.598115 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc 
kubenswrapper[4675]: I0124 06:54:06.614235 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.624143 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.634893 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.647964 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.663480 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674180 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.674211 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.675671 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.684379 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.700585 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f50ad5013d4da9894778bfa51871423665e3ef0dfcfcc1912810961a9347569\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:53:50Z\\\",\\\"message\\\":\\\"ns/factory.go:140\\\\nI0124 06:53:49.392318 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0124 06:53:49.393234 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393350 5972 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0124 06:53:49.393631 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0124 06:53:49.393644 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0124 06:53:49.393682 5972 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0124 06:53:49.393690 5972 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0124 06:53:49.393727 5972 factory.go:656] Stopping watch factory\\\\nI0124 06:53:49.393744 5972 handler.go:208] Removed *v1.Node event handler 7\\\\nI0124 06:53:49.393948 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0124 06:53:49.393958 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0124 06:53:49.393963 5972 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: 
could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"n
ame\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.710598 4675 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.722179 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:06Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776525 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776568 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.776606 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880038 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.880080 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.920681 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:03:51.51263916 +0000 UTC Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982405 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982453 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982474 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:06 crc kubenswrapper[4675]: I0124 06:54:06.982521 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:06Z","lastTransitionTime":"2026-01-24T06:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085421 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085478 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.085504 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188520 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188571 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.188590 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292166 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292245 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292264 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.292275 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395419 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.395450 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.463037 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497659 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.497668 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601236 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.601261 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.705219 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.807987 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.887573 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.887826 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.887947 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:54:23.887910569 +0000 UTC m=+65.184015832 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910636 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.910675 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:07Z","lastTransitionTime":"2026-01-24T06:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.920868 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:22:17.366114499 +0000 UTC Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942313 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942421 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942453 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:07 crc kubenswrapper[4675]: I0124 06:54:07.942393 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942562 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942863 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:07 crc kubenswrapper[4675]: E0124 06:54:07.942963 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.013548 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.068929 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.069761 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:08 crc kubenswrapper[4675]: E0124 06:54:08.069976 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.082500 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.098242 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.107437 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.116377 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.128532 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.141664 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.153916 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.164191 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc 
kubenswrapper[4675]: I0124 06:54:08.179591 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.195653 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.207161 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218542 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218939 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218948 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc 
kubenswrapper[4675]: I0124 06:54:08.218961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.218970 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.232388 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae
637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.253332 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.274087 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.291013 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.308769 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321383 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.321426 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.328385 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:
53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424842 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424854 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.424888 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527305 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.527406 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630142 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630193 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.630211 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732635 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.732690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834932 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834978 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.834989 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.921205 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:00:33.53604663 +0000 UTC Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.928785 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.938244 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.941308 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944775 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944805 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.944837 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:08Z","lastTransitionTime":"2026-01-24T06:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.958091 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.966313 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.984380 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new 
object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:08 crc kubenswrapper[4675]: I0124 06:54:08.996116 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:08Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.006892 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.026095 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.036418 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.045578 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.047259 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.058042 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.070400 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.084955 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.095853 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.108110 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:
36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd
631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.125882 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.139913 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.148975 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149007 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149016 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.149038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.156400 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.166938 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.180439 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc 
kubenswrapper[4675]: I0124 06:54:09.191642 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.202907 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.216288 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.227967 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.238002 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.250369 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252399 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.252487 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.263676 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.294935 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.306549 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.317523 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.329887 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.340567 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358784 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358824 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.358855 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.359375 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.369491 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.390155 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.402309 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:09Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461173 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461509 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461521 4675 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.461546 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.564466 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668102 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668322 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.668346 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.770996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771068 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771085 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.771133 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804655 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804776 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804798 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804820 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.804856 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.804932 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.804969 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.804935322 +0000 UTC m=+83.101040575 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805018 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.805003794 +0000 UTC m=+83.101109117 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805045 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805114 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805130 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805847 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805142 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.805116576 +0000 UTC m=+83.101221839 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805138 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805929 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.806006 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.805982848 +0000 UTC m=+83.102088121 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.805886 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.806115 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:54:41.806094091 +0000 UTC m=+83.102199314 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873119 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.873145 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.921853 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:46:11.814845102 +0000 UTC Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942472 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942524 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942571 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.942619 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942609 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942798 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942859 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:09 crc kubenswrapper[4675]: E0124 06:54:09.942885 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975281 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:09 crc kubenswrapper[4675]: I0124 06:54:09.975295 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:09Z","lastTransitionTime":"2026-01-24T06:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.077203 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179378 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.179409 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.282979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283011 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.283041 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.385930 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386147 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386165 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.386191 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488662 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.488690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590876 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.590885 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.693635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.796640 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.899822 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:10Z","lastTransitionTime":"2026-01-24T06:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:10 crc kubenswrapper[4675]: I0124 06:54:10.922531 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:57:07.596258433 +0000 UTC Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.001815 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103681 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.103789 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.206924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.206991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.207014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.207090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.207119 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310231 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310254 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.310272 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.412801 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471552 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.471569 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.495467 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.499407 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.513827 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.518942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.518974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.518985 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.519002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.519013 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.533190 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.537953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538067 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.538105 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.554091 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558216 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.558239 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.568861 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:11Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.568990 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570351 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.570373 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672651 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.672835 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775700 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775849 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.775871 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878306 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878315 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.878339 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.922773 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:19:07.787339241 +0000 UTC Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941535 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941614 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941626 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.941563 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.941737 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.941848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.941918 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:11 crc kubenswrapper[4675]: E0124 06:54:11.942015 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:11 crc kubenswrapper[4675]: I0124 06:54:11.980670 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:11Z","lastTransitionTime":"2026-01-24T06:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082763 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082812 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.082839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185085 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185166 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.185228 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.288195 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390548 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.390633 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492669 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.492745 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.595190 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698141 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.698182 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804618 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804644 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.804710 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907386 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907431 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907462 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.907477 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:12Z","lastTransitionTime":"2026-01-24T06:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:12 crc kubenswrapper[4675]: I0124 06:54:12.923036 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:50:37.021285994 +0000 UTC Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010173 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.010215 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112754 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112819 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.112832 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.215659 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318298 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318358 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.318370 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.420973 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421045 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.421060 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524655 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.524759 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628384 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628441 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.628497 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731118 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.731195 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834015 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.834145 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.923469 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:04:20.891841778 +0000 UTC Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936553 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.936561 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:13Z","lastTransitionTime":"2026-01-24T06:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941746 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941747 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941804 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:13 crc kubenswrapper[4675]: I0124 06:54:13.941848 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.941971 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.942050 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.942100 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:13 crc kubenswrapper[4675]: E0124 06:54:13.942167 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.039578 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143793 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.143809 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247579 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.247673 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350654 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.350690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453290 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453302 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.453310 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.555997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556086 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.556134 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.658946 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763040 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763097 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.763138 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866440 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866627 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.866669 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.924336 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:02:06.318468598 +0000 UTC Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969397 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969425 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:14 crc kubenswrapper[4675]: I0124 06:54:14.969438 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:14Z","lastTransitionTime":"2026-01-24T06:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.071952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.071981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.071991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.072004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.072012 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174676 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174691 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.174700 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.277523 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.381608 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484476 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484555 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.484566 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587208 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.587284 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.689635 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792159 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792170 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.792199 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894616 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.894708 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.924968 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:17:47.274777885 +0000 UTC Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942262 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942334 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942376 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942453 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942566 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.942616 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942680 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:15 crc kubenswrapper[4675]: E0124 06:54:15.942831 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:15 crc kubenswrapper[4675]: I0124 06:54:15.997398 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:15Z","lastTransitionTime":"2026-01-24T06:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100447 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.100514 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.202566 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305474 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305511 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.305530 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407377 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.407416 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510382 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510444 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.510478 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612689 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612775 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.612798 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.715961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716153 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.716178 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819909 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.819969 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923562 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.923571 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:16Z","lastTransitionTime":"2026-01-24T06:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:16 crc kubenswrapper[4675]: I0124 06:54:16.925657 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:40:07.10918168 +0000 UTC Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026145 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026214 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026248 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.026258 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129311 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129359 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.129386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232032 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.232130 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335167 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335176 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.335202 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437768 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437780 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437796 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.437810 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.540834 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.648938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.648974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.648986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.649002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.649015 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751882 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.751896 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855175 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.855214 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.926343 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:29:07.319592011 +0000 UTC Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.941648 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.941708 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.941800 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.941935 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.941996 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.942049 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.942205 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:17 crc kubenswrapper[4675]: E0124 06:54:17.942332 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958071 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958121 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:17 crc kubenswrapper[4675]: I0124 06:54:17.958141 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:17Z","lastTransitionTime":"2026-01-24T06:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061256 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061285 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.061297 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165332 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.165355 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.268886 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.268965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.268987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.269020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.269043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371811 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.371878 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.475269 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577362 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.577404 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681222 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.681277 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784476 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784505 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.784519 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887347 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887395 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.887433 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.926581 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:31:17.822097179 +0000 UTC Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.957076 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.980117 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.990970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.991034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.991047 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 
06:54:18.991066 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.991077 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:18Z","lastTransitionTime":"2026-01-24T06:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:18 crc kubenswrapper[4675]: I0124 06:54:18.992430 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:18Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.006355 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\
\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.020312 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350
c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.034823 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc 
kubenswrapper[4675]: I0124 06:54:19.047408 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.059287 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.071410 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.086049 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093352 4675 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.093361 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.100768 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.111368 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.128287 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.137975 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.153477 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.162504 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.172667 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.182033 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:19Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195336 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.195450 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.298119 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401604 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.401613 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.505382 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.505413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.506079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.506116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.506128 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.609551 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712664 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712679 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.712711 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815876 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.815963 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917574 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.917612 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:19Z","lastTransitionTime":"2026-01-24T06:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.926932 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:57:06.654012238 +0000 UTC Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942334 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.942475 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942645 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.942758 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942954 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:19 crc kubenswrapper[4675]: I0124 06:54:19.942994 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.943174 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:19 crc kubenswrapper[4675]: E0124 06:54:19.943015 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020090 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020166 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.020189 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122334 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.122397 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224818 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224857 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.224874 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.327728 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429739 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429809 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429818 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.429839 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.532584 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634799 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634844 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.634869 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.737417 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840035 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840082 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.840091 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.927628 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:03:27.089314885 +0000 UTC Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.942423 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:20 crc kubenswrapper[4675]: E0124 06:54:20.942619 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944750 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944827 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:20 crc kubenswrapper[4675]: I0124 06:54:20.944836 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:20Z","lastTransitionTime":"2026-01-24T06:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.047913 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150571 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.150601 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254528 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.254544 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.356986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.357059 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.459621 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566756 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.566810 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668884 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.668975 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.771993 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.772015 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807196 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807270 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.807289 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.824617 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.829942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.830044 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.844022 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847882 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847939 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.847951 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.861170 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865688 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865779 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.865795 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.888301 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891862 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891910 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891933 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.891942 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.905607 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:21Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.905744 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907127 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907169 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907184 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.907195 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:21Z","lastTransitionTime":"2026-01-24T06:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.928859 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:27:38.489862724 +0000 UTC Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942453 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942489 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942481 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:21 crc kubenswrapper[4675]: I0124 06:54:21.942468 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942588 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942768 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942803 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:21 crc kubenswrapper[4675]: E0124 06:54:21.942877 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.009969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010007 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010015 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.010037 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112412 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.112519 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214408 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.214472 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.317337 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.420942 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.420990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.421001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.421019 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.421031 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.523343 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.625952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.625996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.626003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.626017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.626028 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729638 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729745 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.729762 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832274 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832320 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832349 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.832362 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.929029 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:20:27.946098278 +0000 UTC Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.934996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935093 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:22 crc kubenswrapper[4675]: I0124 06:54:22.935122 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:22Z","lastTransitionTime":"2026-01-24T06:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036811 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036847 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.036915 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139690 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139737 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.139752 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243105 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.243126 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.345984 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346040 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.346097 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447844 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447856 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.447882 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.549935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.549972 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.549983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.550002 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.550013 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.652665 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755084 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755156 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.755191 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858191 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.858222 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.929649 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:58:25.903883346 +0000 UTC Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942080 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942214 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942271 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942327 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942371 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942434 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.942477 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.942529 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.958309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.958439 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:23 crc kubenswrapper[4675]: E0124 06:54:23.958492 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:54:55.95847452 +0000 UTC m=+97.254579743 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960979 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:23 crc kubenswrapper[4675]: I0124 06:54:23.960991 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:23Z","lastTransitionTime":"2026-01-24T06:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063225 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063253 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.063294 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.165556 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.267987 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268040 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268052 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.268119 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371240 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371250 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.371278 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474126 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.474170 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576548 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.576625 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679352 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679409 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679424 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.679540 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782052 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782065 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782084 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.782095 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884921 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.884990 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.885002 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.930101 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:10:18.511010389 +0000 UTC Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.952131 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987374 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:24 crc kubenswrapper[4675]: I0124 06:54:24.987386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:24Z","lastTransitionTime":"2026-01-24T06:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089639 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089734 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.089744 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.191735 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.191992 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.192063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.192137 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.192203 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294200 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.294677 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.397742 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500427 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500470 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.500514 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.523330 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/0.log" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.523415 4675 generic.go:334] "Generic (PLEG): container finished" podID="61e129ca-c9dc-4375-b373-5eec702744bd" containerID="6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a" exitCode=1 Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.523590 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerDied","Data":"6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.524043 4675 scope.go:117] "RemoveContainer" containerID="6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.544040 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.555807 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.566915 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.588586 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602327 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.602949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 
06:54:25.602967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.603011 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.618487 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.635236 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\
\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.645276 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350
c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.657208 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc 
kubenswrapper[4675]: I0124 06:54:25.667895 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.679253 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.688839 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.702175 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706416 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706452 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.706468 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.715604 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.727848 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.756327 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.772571 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.787141 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.797414 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:25Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809758 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.809768 4675 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912224 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912258 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912266 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.912288 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:25Z","lastTransitionTime":"2026-01-24T06:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.930393 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:39:31.368348704 +0000 UTC Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941684 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941732 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941775 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.941777 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:25 crc kubenswrapper[4675]: I0124 06:54:25.941831 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.941971 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.942094 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:25 crc kubenswrapper[4675]: E0124 06:54:25.942189 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015103 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015130 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.015153 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117535 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117575 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.117605 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219706 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219781 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.219827 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322305 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.322339 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424629 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424675 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424699 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.424711 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526179 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526187 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.526214 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.527405 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/0.log" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.527453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.537458 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.546317 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\"
:\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.554049 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.571575 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.581358 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.591080 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.605083 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.616429 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.626913 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628305 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628329 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.628339 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.638264 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82
dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.648583 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.660079 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.670301 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.682094 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.694581 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.713676 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.725634 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730646 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730672 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.730684 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.735946 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.744781 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:26Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832877 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.832890 4675 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.930868 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:06:52.218392328 +0000 UTC Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935454 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:26 crc kubenswrapper[4675]: I0124 06:54:26.935494 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:26Z","lastTransitionTime":"2026-01-24T06:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038524 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038539 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.038550 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.140838 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243128 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.243139 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345048 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345059 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.345068 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446770 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446790 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.446798 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548709 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.548740 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651482 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.651550 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754165 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754182 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754205 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.754222 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856920 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.856948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.931140 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:07:15.711185069 +0000 UTC Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942497 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942551 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942513 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.942688 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942663 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942755 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942827 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:27 crc kubenswrapper[4675]: E0124 06:54:27.942920 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:27 crc kubenswrapper[4675]: I0124 06:54:27.959816 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:27Z","lastTransitionTime":"2026-01-24T06:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062234 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.062304 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164223 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164297 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.164307 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266493 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266506 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266523 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.266533 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368873 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368927 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.368948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471546 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.471622 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574645 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574677 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.574690 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676936 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.676970 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779468 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779522 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779583 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.779599 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882186 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882247 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882265 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.882302 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.932129 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:18:13.6025108 +0000 UTC Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.956286 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.968248 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.978458 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984585 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.984617 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:28Z","lastTransitionTime":"2026-01-24T06:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:28 crc kubenswrapper[4675]: I0124 06:54:28.988273 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.000567 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d
0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:
53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:28Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.009661 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.020395 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc 
kubenswrapper[4675]: I0124 06:54:29.030163 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.050595 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.062161 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.078075 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088540 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088588 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.088610 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.096555 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319
f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.108231 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.116664 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.128200 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.137598 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.153001 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.164362 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.175818 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:29Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.190653 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293434 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.293445 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.395828 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.497994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498022 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498043 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.498052 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600395 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600630 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600641 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.600669 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704343 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.704437 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806633 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.806660 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910698 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910808 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.910833 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:29Z","lastTransitionTime":"2026-01-24T06:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.932627 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:04:42.135659149 +0000 UTC Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.941975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942099 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.942277 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942345 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.942475 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942540 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:29 crc kubenswrapper[4675]: I0124 06:54:29.942668 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:29 crc kubenswrapper[4675]: E0124 06:54:29.942756 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013674 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.013686 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115561 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.115623 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.217950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.218050 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320356 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.320398 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422683 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422735 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422765 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.422776 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525106 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525143 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.525176 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.627399 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731244 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731264 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731288 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.731305 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835388 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835422 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.835436 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.933193 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:11:49.896503751 +0000 UTC Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937607 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:30 crc kubenswrapper[4675]: I0124 06:54:30.937634 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:30Z","lastTransitionTime":"2026-01-24T06:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040183 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.040192 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142864 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142904 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142953 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.142964 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245640 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245693 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245708 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.245732 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348632 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348693 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348743 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.348755 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450684 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450791 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450814 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.450829 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553204 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553238 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.553306 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655742 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655797 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655813 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.655843 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758301 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.758405 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861530 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861552 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.861570 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.934241 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:17:27.962671975 +0000 UTC Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.941795 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.941868 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.941870 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.941988 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.942067 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.942130 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.942091 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:31 crc kubenswrapper[4675]: E0124 06:54:31.942316 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963767 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:31 crc kubenswrapper[4675]: I0124 06:54:31.963852 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:31Z","lastTransitionTime":"2026-01-24T06:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.065950 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.065994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.066005 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.066021 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.066031 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168085 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168101 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.168115 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188055 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188116 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.188233 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.208491 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.212869 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.212961 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.212981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.213006 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.213074 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.232763 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237426 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237480 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.237506 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.259676 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264449 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264476 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.264486 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.280417 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.284877 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.284956 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.284981 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.285013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.285036 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.299328 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:32Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:32 crc kubenswrapper[4675]: E0124 06:54:32.299472 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301195 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301212 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301240 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.301257 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404558 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404602 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.404623 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507339 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.507381 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609958 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609986 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.609999 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712398 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712500 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.712511 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814680 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814738 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814754 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814769 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.814779 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.917351 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:32Z","lastTransitionTime":"2026-01-24T06:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:32 crc kubenswrapper[4675]: I0124 06:54:32.934830 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:23:19.637342851 +0000 UTC Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019576 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019612 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.019674 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122510 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122603 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122619 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.122682 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225163 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225174 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.225205 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328111 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328177 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328212 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.328227 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431217 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431249 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.431261 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534682 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534784 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534828 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.534846 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637509 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637781 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637792 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.637823 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740181 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740198 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.740244 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842525 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842660 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842823 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.842961 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.935686 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:49:43.937959265 +0000 UTC Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.941952 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.942083 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.942004 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.941980 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942368 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942538 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942651 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:33 crc kubenswrapper[4675]: E0124 06:54:33.942846 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946202 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946241 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946252 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946268 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:33 crc kubenswrapper[4675]: I0124 06:54:33.946282 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:33Z","lastTransitionTime":"2026-01-24T06:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.048996 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049061 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049076 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.049089 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151785 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151832 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151843 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151859 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.151871 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265410 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265479 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.265574 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368351 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368391 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.368411 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471465 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471553 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471572 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.471583 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574685 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574702 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574759 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.574778 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678127 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678145 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.678187 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781016 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781096 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781117 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.781155 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883385 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883396 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883410 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.883421 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.936225 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:53:20.95012673 +0000 UTC Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.942376 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986113 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:34 crc kubenswrapper[4675]: I0124 06:54:34.986122 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:34Z","lastTransitionTime":"2026-01-24T06:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089182 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089218 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.089266 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.192642 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295392 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295623 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.295808 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.398935 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399049 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399070 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399094 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.399112 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.510140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511913 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.511985 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.561786 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.567321 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.568250 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.588638 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.606369 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615003 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615051 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.615106 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.632888 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\
\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.645769 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.660069 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.673299 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.686251 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.701605 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b
1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.714271 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717757 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717818 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.717826 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.733232 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.743853 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.756233 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.772045 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.791974 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.805781 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.820962 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821028 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.821053 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.822654 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.839003 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.865697 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.875317 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:35Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923492 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923533 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923584 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.923599 4675 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:35Z","lastTransitionTime":"2026-01-24T06:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.936831 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:43:54.563550952 +0000 UTC Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942087 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942146 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942103 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:35 crc kubenswrapper[4675]: I0124 06:54:35.942146 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942202 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942304 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942412 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:35 crc kubenswrapper[4675]: E0124 06:54:35.942445 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025671 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025711 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025740 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.025749 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128036 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128078 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128104 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.128115 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.229994 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230034 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230044 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.230070 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333439 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333529 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.333581 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.436681 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538457 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538519 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.538610 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.572481 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.573348 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/2.log" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.576712 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" exitCode=1 Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.576790 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.576828 4675 scope.go:117] "RemoveContainer" containerID="0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.578067 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:54:36 crc kubenswrapper[4675]: E0124 06:54:36.578406 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.593962 4675 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.606395 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.620418 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.635962 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640337 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.640350 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.655407 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.667294 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.676703 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.702249 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0951c0a26591a879229b0a7cc3079a7b74d153bff447ddb368196fd713e8d662\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:06Z\\\",\\\"message\\\":\\\"-gdk6g\\\\nI0124 06:54:05.948905 6193 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0124 06:54:05.948912 6193 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0124 06:54:05.948947 6193 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:05Z is after 2025-08-24T17:21:41Z]\\\\nI0124 06:54:05.948962 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0124 06:54:05.948885 6193 obj_retry.go:303] Retry object setup: *v1.Pod openshift-im\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:36Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0124 06:54:36.196176 6594 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97
691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.714672 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc
0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.727759 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743131 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743211 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743255 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.743279 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.749204 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.767544 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.789192 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.806033 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.821819 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.831491 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.839924 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845367 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845420 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845450 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.845462 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.851838 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82
dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.861986 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:36Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.937234 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:54:56.457892262 +0000 UTC Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947355 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947370 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:36 crc kubenswrapper[4675]: I0124 06:54:36.947379 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:36Z","lastTransitionTime":"2026-01-24T06:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049941 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.049964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.050001 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153605 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153666 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.153701 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255652 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255687 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255712 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.255764 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358051 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358100 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.358109 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460531 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.460603 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563855 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563963 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.563997 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.564043 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.583609 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.589260 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.589648 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.610108 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.626131 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.652558 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:36Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", 
Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0124 06:54:36.196176 6594 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666441 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666481 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.666528 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.668520 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.680687 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc 
kubenswrapper[4675]: I0124 06:54:37.697611 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.717473 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.736972 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.748956 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.762102 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768473 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768485 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc 
kubenswrapper[4675]: I0124 06:54:37.768507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.768523 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.780857 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae
637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.791253 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.803192 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T0
6:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.821865 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.833188 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.844654 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.860476 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] 
Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.870940 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.873857 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a5
6a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.888101 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:37Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.938146 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:18:18.537932568 +0000 UTC Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.941514 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.941783 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.941952 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.942125 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.942341 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.942380 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.942442 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:37 crc kubenswrapper[4675]: E0124 06:54:37.942622 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.977379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.977667 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.977876 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.978466 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:37 crc kubenswrapper[4675]: I0124 06:54:37.978481 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:37Z","lastTransitionTime":"2026-01-24T06:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081239 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081309 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.081348 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.184625 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287601 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287653 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287670 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287692 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.287708 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390294 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390304 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390317 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.390327 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.494989 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495022 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495042 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.495051 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597498 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597701 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597846 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.597929 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.598004 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700194 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700494 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700778 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.700914 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803745 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803832 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.803848 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907308 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907342 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.907381 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:38Z","lastTransitionTime":"2026-01-24T06:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.938701 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:47:42.186756429 +0000 UTC Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.956148 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.968349 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.979745 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:38 crc kubenswrapper[4675]: I0124 06:54:38.989996 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f6276faf6bab8c26c2a5904988632fb4682a062c097d6db9892f55f79cd8344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d
2af92495ddb45abf74ec8d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx56z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nqn5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:38Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.002570 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-797q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562cfea2-dd3d-4729-8577-10f3a20ee031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719bd949dcebb43ffe5b06efc471def432d11360be3a3ec5afde7d915b2b4522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d414b426cfbceba37c25be3c5eb1f972ccbb24363515a50d0b2627705422353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://950d7878598376247d6bf0b4c509c1ef43479f904d676c698aff392280022827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3935741682afe3a5bfa6df2f658a472cd94b2d42c00a90a8e5f9e4eb68e114a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3161
e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3161e5c06b21b899df092d55a335e5fb9909525c307278c0ebb3278c3319345\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fed74bd609fc1015f9aef5e28410d9c9a563cbd640a1b7f04dd3ddab6883eb86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae637b19937b6f46c4ba2c2c7b51e5026b7766d52d4d0b16fc449bcf166991bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvsws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-797q5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012771 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012807 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012816 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.012840 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.015680 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rtdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac8e7205-a99a-4174-bd7c-5ddaa11f9916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f177c15aedfc2ef606350c6b8a02b6ef93892208c60f2e3f260cf99d6967930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hhd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rtdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.031813 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtbqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8mdgj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.046325 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de53f3d-828e-4acb-8055-03329b250d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6fd26bfd86e497d84d9267d00d273bedbb9387c3fa8c0e37836972f12532b00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c53eb0c39ee57069fc961f21d82dd73fbadcf8331433852f3230039a40feb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9325197c820ab5701505c757501a8a978dd2065fd360194c4ef67aeaf15e63f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6f
c829b65527e02fb6ee556a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02bbdfb7911e8c42fe3c24e6b1cabfb90e31d4a6fc829b65527e02fb6ee556a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.076397 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"290a1692-f149-4550-8ff1-28c3f16fb059\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f27c2077f3b92f915558f1dd5d2ef293716e2da4a598167e68628f06e44e1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff094fb7b43206040cb65c33b883b275b891df634a6edce11088234879281f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a0b1ba1e69efd30e48bd685c48af8d6017b9466ae77ab32bfc0f18b71b50084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b36019814dcd9d5931e8512cd6defcc05dcf91e4004f17d46307e37e29bb68c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03c731c4360c12be6ba5aae1d6e5f7342e022c85ad73edc690c6bf26142cbcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bc6555e0f06dd7a185512de377c626cb53542bf777e2bfc291a36ef014cd398\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676c29bf54f7df780bd22b26c270745be85804be7114739b20053038a6a46863\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01711c6e095999c2c3b1087bfb71280eccc435e8d11aa78964bafefaa00e7cd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.092580 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed99261e276fce2ebe87b13c8648b5b292da659055ded9384626df76c2d4260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.105458 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb2b13a6bed299f0cbfebc982adbe129053ae79d7f8fb335dcb79cc1e0be6c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49e51410de50326e0a1f89d5d3f881d2072ced4f1f029fa56c4140be6bdb0cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.115580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.115817 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.115885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.116007 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.116127 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.117788 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zx9ns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61e129ca-c9dc-4375-b373-5eec702744bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6319
f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:24Z\\\",\\\"message\\\":\\\"2026-01-24T06:53:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57\\\\n2026-01-24T06:53:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f65eb654-413d-4f51-8dd8-7ec83acf2a57 to /host/opt/cni/bin/\\\\n2026-01-24T06:53:39Z [verbose] multus-daemon started\\\\n2026-01-24T06:53:39Z [verbose] Readiness Indicator file check\\\\n2026-01-24T06:54:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2gx8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zx9ns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.139555 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44ba90f6-bfee-4e2b-8f89-c43235412e6c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-24T06:53:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0124 06:53:31.293642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0124 06:53:31.294568 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3582653898/tls.crt::/tmp/serving-cert-3582653898/tls.key\\\\\\\"\\\\nI0124 06:53:36.902052 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0124 06:53:36.906519 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0124 06:53:36.906543 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0124 06:53:36.906565 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0124 06:53:36.906570 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0124 06:53:36.911668 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0124 06:53:36.911845 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911899 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0124 06:53:36.911942 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0124 06:53:36.911693 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0124 06:53:36.911980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0124 06:53:36.912066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0124 06:53:36.912096 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0124 06:53:36.913678 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f65dd631a8014ba0ebd9f9760b68206
548545cc0f5b4487364b066c66d89993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.151675 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77a63dc3-49e5-4ae1-a5bb-f9087674a6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://023dec0f6bef7bb757f548796e18a9e5c0b67d47eb79e9a00225523dfde20801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1140ac6ea4dfe511ca613b3c33a0c92ad8e06253034ec572a3b0a105bb14bbcf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.168922 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99d87587-49af-45f5-bb92-570bcdd28a78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a288effeb6e64c80957a009be5059c1da3ddaf86ac145d7412dfab4161e5add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59a47c963c66b6aab9ec4529b937f34cb0a1d0519599c8c2b4a7845072739d30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://424dc5a0272f8c600e17b08f28c646b4cc27473730f9237ad8700471d3436b5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-24T06:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.183394 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zbs9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"581bfd98-ba0e-4e17-812b-088da051ba3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2541e4126ef95728b4c327d11e8b61c817c6b77ca029f79a10749ed46dec3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxvpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zbs9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.206487 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50a4333f-fd95-41a0-9ac8-4c21f9000870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-24T06:54:36Z\\\",\\\"message\\\":\\\"ind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", 
Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"UDP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nF0124 06:54:36.196176 6594 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-24T06:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a319e0f0deba1885ec
c22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-24T06:53:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-24T06:53:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qgbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vsnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218436 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218464 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218488 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.218504 4675 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.221289 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d143943f-5bfe-4381-b997-c99ce1ccf80b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04527232f5a0133cc347af91c86df1bdf01dcc227e7255551ec80fd160fb83ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ca7da316422e0625f86e3ea664b7d722bcc0f90d1865c23b746b5011418fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84ltl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-24T06:53:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-42gs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.231583 4675 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-24T06:53:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d0ca96788ee318e983b166ac1f631600cab6341851736e4a3bc8e65c813c4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-24T06:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:39Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322554 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322647 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.322769 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425020 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425350 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425455 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.425611 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528589 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528626 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528648 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.528656 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631544 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631621 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.631630 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734029 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734073 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734103 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.734115 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.836841 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.938913 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:40:38.027009065 +0000 UTC Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939489 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.939512 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:39Z","lastTransitionTime":"2026-01-24T06:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.941944 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.941962 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.941968 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942041 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:39 crc kubenswrapper[4675]: I0124 06:54:39.942164 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942234 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942352 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:39 crc kubenswrapper[4675]: E0124 06:54:39.942481 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042033 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042041 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.042068 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.144976 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145009 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145031 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.145038 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.247576 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350275 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.350386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.452983 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453045 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.453077 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555538 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555617 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.555660 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658313 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658364 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658404 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.658422 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761307 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761432 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.761456 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.863964 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864354 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864649 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.864814 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.939460 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:16:24.183373751 +0000 UTC Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.968508 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.968880 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.969098 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.969402 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:40 crc kubenswrapper[4675]: I0124 06:54:40.969624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:40Z","lastTransitionTime":"2026-01-24T06:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072852 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072907 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072923 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.072977 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176389 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.176440 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280543 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280615 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280634 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.280676 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384106 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384132 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384160 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.384180 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487269 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487331 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487348 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487373 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.487388 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590782 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590858 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590888 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590911 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.590928 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694379 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694456 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694472 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694497 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.694515 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.797926 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.797995 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.798017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.798045 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.798066 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849185 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849344 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849386 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849482 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.849441123 +0000 UTC m=+147.145546386 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.849548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849604 4675 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849641 4675 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849655 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849854 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849884 4675 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849709 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.849676999 +0000 UTC m=+147.145782302 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849941 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849986 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.849959735 +0000 UTC m=+147.146065068 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.849991 4675 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.850027 4675 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.850032 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.850014707 +0000 UTC m=+147.146120120 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.850123 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.850097549 +0000 UTC m=+147.146202822 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904490 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904517 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.904527 4675 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:41Z","lastTransitionTime":"2026-01-24T06:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.940401 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:03:34.298918683 +0000 UTC Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.941755 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.941769 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.941877 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:41 crc kubenswrapper[4675]: I0124 06:54:41.942117 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942248 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942056 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942358 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:41 crc kubenswrapper[4675]: E0124 06:54:41.942461 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008139 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008190 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008213 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008233 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.008246 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110656 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110697 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110710 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110753 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.110769 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213095 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213164 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213185 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.213199 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316831 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316871 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316895 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.316910 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419149 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419226 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419257 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.419348 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522235 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522261 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522292 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.522316 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614695 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614773 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614789 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614806 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.614819 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.633774 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638267 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638413 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638600 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.638693 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.657352 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662384 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662557 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662650 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662774 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.662875 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.682174 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686581 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686599 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.686636 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.700428 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.705875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.705949 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.705971 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.706000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.706020 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.725144 4675 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-24T06:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"79a1b90b-9d8a-4b28-bda7-61ba2f3990af\\\",\\\"systemUUID\\\":\\\"162c3bb2-7c82-48b0-b2c6-851c52c6f34e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-24T06:54:42Z is after 2025-08-24T17:21:41Z" Jan 24 06:54:42 crc kubenswrapper[4675]: E0124 06:54:42.725528 4675 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728110 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728161 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.728181 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830810 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830916 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830938 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830969 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.830994 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.932970 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933013 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933023 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933038 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.933048 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:42Z","lastTransitionTime":"2026-01-24T06:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:42 crc kubenswrapper[4675]: I0124 06:54:42.941000 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:28:42.960103205 +0000 UTC Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036495 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036532 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.036552 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.138999 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139078 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139103 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.139160 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241837 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241900 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.241957 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345622 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345643 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.345660 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.448937 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552575 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552593 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.552642 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.655801 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656291 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.656581 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760008 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760504 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.760862 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864333 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864390 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864407 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864430 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.864446 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941841 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941883 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941883 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.941900 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:54:35.275264497 +0000 UTC Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942042 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.942100 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942334 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942326 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:43 crc kubenswrapper[4675]: E0124 06:54:43.942469 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967800 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967850 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967881 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:43 crc kubenswrapper[4675]: I0124 06:54:43.967892 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:43Z","lastTransitionTime":"2026-01-24T06:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071209 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071486 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071663 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.071777 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175105 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175152 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175168 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175189 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.175205 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317025 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317056 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317077 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.317087 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420282 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420340 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420357 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420381 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.420399 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523327 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523418 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523451 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.523473 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626228 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626295 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626338 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.626347 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729474 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729547 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729564 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729590 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.729608 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833526 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833577 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833589 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833610 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.833622 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937353 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937406 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.937448 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:44Z","lastTransitionTime":"2026-01-24T06:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:44 crc kubenswrapper[4675]: I0124 06:54:44.942910 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:47:19.925760638 +0000 UTC Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040064 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040125 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040140 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.040151 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143323 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143417 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.143480 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245277 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245328 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245344 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245366 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.245380 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348088 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348142 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348154 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348171 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.348184 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450898 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.450906 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553442 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553458 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553475 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.553488 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655870 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655934 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.655988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.656011 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759318 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759386 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759407 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.759423 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862487 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862598 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862619 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862642 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.862659 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.941900 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.941971 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.942480 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.942544 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942610 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942709 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:45 crc kubenswrapper[4675]: E0124 06:54:45.942905 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.943578 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:47:56.940609845 +0000 UTC Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.965376 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.965781 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.965957 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.966098 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:45 crc kubenswrapper[4675]: I0124 06:54:45.966236 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:45Z","lastTransitionTime":"2026-01-24T06:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069317 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069375 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069393 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.069427 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.172914 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173363 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173569 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.173640 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.276947 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277039 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277062 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.277082 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.380624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483276 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483284 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483299 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.483309 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587075 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587124 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587136 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587151 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.587163 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690507 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690563 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.690624 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799657 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799707 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799755 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799786 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.799805 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903050 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903112 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.903182 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:46Z","lastTransitionTime":"2026-01-24T06:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:46 crc kubenswrapper[4675]: I0124 06:54:46.944299 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:25:30.375443891 +0000 UTC Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006058 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006102 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006129 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.006142 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109018 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109114 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.109183 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211673 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211733 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211761 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.211771 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.313937 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.313989 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.314001 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.314017 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.314033 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416296 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416314 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.416325 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519278 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519325 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519336 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519352 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.519365 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621030 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621069 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621091 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.621100 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723273 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723330 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723346 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723369 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.723386 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826188 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826197 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826210 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.826218 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929224 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929286 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.929309 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:47Z","lastTransitionTime":"2026-01-24T06:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.941904 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.941932 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.941958 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.941993 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.942103 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.942098 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e"
Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.942148 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 24 06:54:47 crc kubenswrapper[4675]: E0124 06:54:47.942188 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 24 06:54:47 crc kubenswrapper[4675]: I0124 06:54:47.945165 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:04:56.035279544 +0000 UTC
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031878 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031889 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031908 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.031920 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134840 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134866 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134874 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134887 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.134896 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.237925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.237974 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.237991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.238014 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.238032 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341046 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341118 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341135 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341158 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.341175 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444471 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444565 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.444610 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547463 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547879 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.547952 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651447 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651514 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651537 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651566 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.651588 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756099 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756802 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756829 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756848 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.756859 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859435 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859477 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.859499 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.945683 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:15:38.49288908 +0000 UTC
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962845 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962907 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962925 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.962971 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:48Z","lastTransitionTime":"2026-01-24T06:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:48 crc kubenswrapper[4675]: I0124 06:54:48.988039 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.988021363 podStartE2EDuration="1m7.988021363s" podCreationTimestamp="2026-01-24 06:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:48.987566073 +0000 UTC m=+90.283671296" watchObservedRunningTime="2026-01-24 06:54:48.988021363 +0000 UTC m=+90.284126586"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065541 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065571 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065582 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065596 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.065604 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.066748 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.066737362 podStartE2EDuration="1m12.066737362s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.066067616 +0000 UTC m=+90.362172859" watchObservedRunningTime="2026-01-24 06:54:49.066737362 +0000 UTC m=+90.362842585"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.066973 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zx9ns" podStartSLOduration=72.066968917 podStartE2EDuration="1m12.066968917s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.048664455 +0000 UTC m=+90.344769758" watchObservedRunningTime="2026-01-24 06:54:49.066968917 +0000 UTC m=+90.363074140"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.079322 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.079296718 podStartE2EDuration="25.079296718s" podCreationTimestamp="2026-01-24 06:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.079228147 +0000 UTC m=+90.375333370" watchObservedRunningTime="2026-01-24 06:54:49.079296718 +0000 UTC m=+90.375401961"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.096210 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.096191887 podStartE2EDuration="1m12.096191887s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.094094438 +0000 UTC m=+90.390199671" watchObservedRunningTime="2026-01-24 06:54:49.096191887 +0000 UTC m=+90.392297120"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.107050 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zbs9f" podStartSLOduration=72.107033514 podStartE2EDuration="1m12.107033514s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.106537142 +0000 UTC m=+90.402642375" watchObservedRunningTime="2026-01-24 06:54:49.107033514 +0000 UTC m=+90.403138737"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.155192 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-42gs8" podStartSLOduration=71.1551738 podStartE2EDuration="1m11.1551738s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.155113869 +0000 UTC m=+90.451219092" watchObservedRunningTime="2026-01-24 06:54:49.1551738 +0000 UTC m=+90.451279023"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167515 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167587 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.167617 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.248090 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podStartSLOduration=72.248074645 podStartE2EDuration="1m12.248074645s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.22632426 +0000 UTC m=+90.522429503" watchObservedRunningTime="2026-01-24 06:54:49.248074645 +0000 UTC m=+90.544179868"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.248218 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-797q5" podStartSLOduration=72.248213778 podStartE2EDuration="1m12.248213778s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.247690895 +0000 UTC m=+90.543796148" watchObservedRunningTime="2026-01-24 06:54:49.248213778 +0000 UTC m=+90.544319001"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.260558 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7rtdz" podStartSLOduration=72.260531139 podStartE2EDuration="1m12.260531139s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.259669438 +0000 UTC m=+90.555774661" watchObservedRunningTime="2026-01-24 06:54:49.260531139 +0000 UTC m=+90.556636402"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269536 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269580 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269592 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269609 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.269620 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372192 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372534 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372637 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.372864 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475026 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475079 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475091 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475106 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.475117 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577751 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577791 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577804 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577820 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.577832 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680502 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680556 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680573 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680597 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.680614 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783024 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783089 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783108 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.783120 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887433 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887491 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887501 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887521 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.887531 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942084 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942374 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.942948 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942471 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.943075 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.942462 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.942688 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:49 crc kubenswrapper[4675]: E0124 06:54:49.943180 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.946568 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:04:55.713081449 +0000 UTC Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991060 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991322 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991467 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991613 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:49 crc kubenswrapper[4675]: I0124 06:54:49.991804 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:49Z","lastTransitionTime":"2026-01-24T06:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094178 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094219 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094229 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.094258 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197148 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197483 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197614 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197777 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.197924 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.300794 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301271 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.301543 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.404795 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405157 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405243 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.405321 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507834 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507885 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507901 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507922 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.507937 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.610960 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611300 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611678 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.611918 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.715839 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716037 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716074 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716107 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.716127 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818513 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818578 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818608 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.818620 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920865 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920924 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920940 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.920948 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:50Z","lastTransitionTime":"2026-01-24T06:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:50 crc kubenswrapper[4675]: I0124 06:54:50.946806 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:57:20.304679494 +0000 UTC Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023550 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023594 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023605 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023618 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.023631 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126203 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126246 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126260 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126279 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.126294 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229518 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229591 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229606 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229625 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.229638 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332321 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332368 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332387 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332409 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.332425 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434915 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434943 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434952 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434965 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.434974 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.537988 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538063 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538087 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538115 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.538141 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640496 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640559 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640595 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640624 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.640644 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743461 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743503 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743512 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743527 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.743536 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846133 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846162 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846172 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846184 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.846194 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941602 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941652 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941639 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.941767 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.941881 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.942016 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.942196 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:51 crc kubenswrapper[4675]: E0124 06:54:51.942352 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.947062 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:21:55.341670605 +0000 UTC Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949813 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949868 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949889 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949918 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:51 crc kubenswrapper[4675]: I0124 06:54:51.949942 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:51Z","lastTransitionTime":"2026-01-24T06:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053371 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053428 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053443 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053469 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.053487 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156704 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156803 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156821 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156853 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.156887 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260345 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260403 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260415 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260438 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.260449 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374275 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374414 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374427 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374445 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.374459 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.477815 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478000 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478027 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478057 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.478081 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582010 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582092 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582109 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582134 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.582151 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.684928 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.684991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.685004 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.685021 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.685037 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787896 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787967 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.787991 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.788007 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.891658 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.891966 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.892144 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.892303 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.892437 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.943137 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:54:52 crc kubenswrapper[4675]: E0124 06:54:52.943326 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.947472 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:47:06.194487128 +0000 UTC Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995788 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995836 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995852 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995875 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:52 crc kubenswrapper[4675]: I0124 06:54:52.995893 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:52Z","lastTransitionTime":"2026-01-24T06:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.066951 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067221 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067326 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067394 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.067469 4675 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T06:54:53Z","lastTransitionTime":"2026-01-24T06:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.124101 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.124085298 podStartE2EDuration="45.124085298s" podCreationTimestamp="2026-01-24 06:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:49.284503395 +0000 UTC m=+90.580608618" watchObservedRunningTime="2026-01-24 06:54:53.124085298 +0000 UTC m=+94.420190521" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.124476 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs"] Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.124806 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.128041 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.128450 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.129261 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.133705 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.212995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8305da25-aad5-435c-994d-00c2fc75ed74-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213088 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8305da25-aad5-435c-994d-00c2fc75ed74-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213225 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8305da25-aad5-435c-994d-00c2fc75ed74-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.213292 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314405 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314464 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8305da25-aad5-435c-994d-00c2fc75ed74-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314510 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8305da25-aad5-435c-994d-00c2fc75ed74-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314591 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314659 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8305da25-aad5-435c-994d-00c2fc75ed74-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.314883 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8305da25-aad5-435c-994d-00c2fc75ed74-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.316751 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8305da25-aad5-435c-994d-00c2fc75ed74-service-ca\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.324635 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8305da25-aad5-435c-994d-00c2fc75ed74-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc 
kubenswrapper[4675]: I0124 06:54:53.335571 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8305da25-aad5-435c-994d-00c2fc75ed74-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-p5szs\" (UID: \"8305da25-aad5-435c-994d-00c2fc75ed74\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.440532 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.643169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" event={"ID":"8305da25-aad5-435c-994d-00c2fc75ed74","Type":"ContainerStarted","Data":"3106dd92396404fd20cf5616dd67a83e0bc41d2bf11bc2b2b98acebc7ba1df99"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.643221 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" event={"ID":"8305da25-aad5-435c-994d-00c2fc75ed74","Type":"ContainerStarted","Data":"f54b4a95013ed9b88107852ffb3256e81ce6e9af7fa8b1c770a8cee29e28c9c1"} Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942189 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942232 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942196 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.942186 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942351 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942455 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942541 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:53 crc kubenswrapper[4675]: E0124 06:54:53.942626 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.947830 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:08:00.974415207 +0000 UTC Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.948290 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 24 06:54:53 crc kubenswrapper[4675]: I0124 06:54:53.954914 4675 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942315 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942450 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942327 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942312 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942528 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:55 crc kubenswrapper[4675]: I0124 06:54:55.942336 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942649 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:55 crc kubenswrapper[4675]: E0124 06:54:55.942745 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:56 crc kubenswrapper[4675]: I0124 06:54:56.041674 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:56 crc kubenswrapper[4675]: E0124 06:54:56.041948 4675 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:56 crc kubenswrapper[4675]: E0124 06:54:56.042059 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs podName:9b6e6bdc-02e8-45ac-b89d-caf409ba451e nodeName:}" failed. No retries permitted until 2026-01-24 06:56:00.042030255 +0000 UTC m=+161.338135518 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs") pod "network-metrics-daemon-8mdgj" (UID: "9b6e6bdc-02e8-45ac-b89d-caf409ba451e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941632 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941691 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941632 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:57 crc kubenswrapper[4675]: I0124 06:54:57.941824 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.941969 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.942102 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.942215 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:57 crc kubenswrapper[4675]: E0124 06:54:57.942399 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.942554 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.942694 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.942895 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.942972 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.942999 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.943083 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:54:59 crc kubenswrapper[4675]: I0124 06:54:59.943097 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:54:59 crc kubenswrapper[4675]: E0124 06:54:59.943341 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.026871 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.027078 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.027194 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.027582 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.942049 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.942162 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:01 crc kubenswrapper[4675]: I0124 06:55:01.942053 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:01 crc kubenswrapper[4675]: E0124 06:55:01.942308 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:02 crc kubenswrapper[4675]: I0124 06:55:02.942286 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:02 crc kubenswrapper[4675]: E0124 06:55:02.942534 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:02 crc kubenswrapper[4675]: I0124 06:55:02.943000 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:02 crc kubenswrapper[4675]: E0124 06:55:02.943169 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:03 crc kubenswrapper[4675]: I0124 06:55:03.941492 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:03 crc kubenswrapper[4675]: I0124 06:55:03.941638 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:03 crc kubenswrapper[4675]: E0124 06:55:03.941745 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:03 crc kubenswrapper[4675]: E0124 06:55:03.942113 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:03 crc kubenswrapper[4675]: I0124 06:55:03.942460 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:55:03 crc kubenswrapper[4675]: E0124 06:55:03.942655 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:55:04 crc kubenswrapper[4675]: I0124 06:55:04.941431 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:04 crc kubenswrapper[4675]: E0124 06:55:04.941651 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:04 crc kubenswrapper[4675]: I0124 06:55:04.941709 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:04 crc kubenswrapper[4675]: E0124 06:55:04.941934 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:05 crc kubenswrapper[4675]: I0124 06:55:05.942522 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:05 crc kubenswrapper[4675]: I0124 06:55:05.942528 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:05 crc kubenswrapper[4675]: E0124 06:55:05.943715 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:05 crc kubenswrapper[4675]: E0124 06:55:05.943577 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:06 crc kubenswrapper[4675]: I0124 06:55:06.942173 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:06 crc kubenswrapper[4675]: I0124 06:55:06.942286 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:06 crc kubenswrapper[4675]: E0124 06:55:06.942492 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:06 crc kubenswrapper[4675]: E0124 06:55:06.942770 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:07 crc kubenswrapper[4675]: I0124 06:55:07.942381 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:07 crc kubenswrapper[4675]: I0124 06:55:07.942558 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:07 crc kubenswrapper[4675]: E0124 06:55:07.942805 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:07 crc kubenswrapper[4675]: E0124 06:55:07.942867 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:08 crc kubenswrapper[4675]: I0124 06:55:08.942517 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:08 crc kubenswrapper[4675]: I0124 06:55:08.942592 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:08 crc kubenswrapper[4675]: E0124 06:55:08.944716 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:08 crc kubenswrapper[4675]: E0124 06:55:08.944891 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:09 crc kubenswrapper[4675]: I0124 06:55:09.941925 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:09 crc kubenswrapper[4675]: I0124 06:55:09.941984 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:09 crc kubenswrapper[4675]: E0124 06:55:09.942090 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:09 crc kubenswrapper[4675]: E0124 06:55:09.942289 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:10 crc kubenswrapper[4675]: I0124 06:55:10.941878 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:10 crc kubenswrapper[4675]: I0124 06:55:10.941908 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:10 crc kubenswrapper[4675]: E0124 06:55:10.943084 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:10 crc kubenswrapper[4675]: E0124 06:55:10.943558 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.701645 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702178 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/0.log" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702225 4675 generic.go:334] "Generic (PLEG): container finished" podID="61e129ca-c9dc-4375-b373-5eec702744bd" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe" exitCode=1 Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702259 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" 
event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerDied","Data":"6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe"} Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.702296 4675 scope.go:117] "RemoveContainer" containerID="6319f662fc7d72824b89c63c2d2dbb5f35d545c5967c51f8a0c516b26ecdd53a" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.703010 4675 scope.go:117] "RemoveContainer" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe" Jan 24 06:55:11 crc kubenswrapper[4675]: E0124 06:55:11.703209 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zx9ns_openshift-multus(61e129ca-c9dc-4375-b373-5eec702744bd)\"" pod="openshift-multus/multus-zx9ns" podUID="61e129ca-c9dc-4375-b373-5eec702744bd" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.728292 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-p5szs" podStartSLOduration=94.728270948 podStartE2EDuration="1m34.728270948s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:54:53.657264279 +0000 UTC m=+94.953369502" watchObservedRunningTime="2026-01-24 06:55:11.728270948 +0000 UTC m=+113.024376181" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.941925 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:11 crc kubenswrapper[4675]: I0124 06:55:11.941965 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:11 crc kubenswrapper[4675]: E0124 06:55:11.942069 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:11 crc kubenswrapper[4675]: E0124 06:55:11.942201 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:12 crc kubenswrapper[4675]: I0124 06:55:12.707092 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log" Jan 24 06:55:12 crc kubenswrapper[4675]: I0124 06:55:12.941973 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:12 crc kubenswrapper[4675]: I0124 06:55:12.942143 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:12 crc kubenswrapper[4675]: E0124 06:55:12.942592 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:12 crc kubenswrapper[4675]: E0124 06:55:12.942690 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:13 crc kubenswrapper[4675]: I0124 06:55:13.941429 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:13 crc kubenswrapper[4675]: I0124 06:55:13.941429 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:13 crc kubenswrapper[4675]: E0124 06:55:13.941951 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:13 crc kubenswrapper[4675]: E0124 06:55:13.941853 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:14 crc kubenswrapper[4675]: I0124 06:55:14.942261 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:14 crc kubenswrapper[4675]: E0124 06:55:14.942504 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:14 crc kubenswrapper[4675]: I0124 06:55:14.942639 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:14 crc kubenswrapper[4675]: E0124 06:55:14.942783 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:14 crc kubenswrapper[4675]: I0124 06:55:14.943862 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:55:14 crc kubenswrapper[4675]: E0124 06:55:14.944033 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vsnzs_openshift-ovn-kubernetes(50a4333f-fd95-41a0-9ac8-4c21f9000870)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" Jan 24 06:55:15 crc kubenswrapper[4675]: I0124 06:55:15.942169 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:15 crc kubenswrapper[4675]: I0124 06:55:15.942208 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:15 crc kubenswrapper[4675]: E0124 06:55:15.942368 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:15 crc kubenswrapper[4675]: E0124 06:55:15.942456 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:16 crc kubenswrapper[4675]: I0124 06:55:16.942043 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:16 crc kubenswrapper[4675]: I0124 06:55:16.942062 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:16 crc kubenswrapper[4675]: E0124 06:55:16.942254 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:16 crc kubenswrapper[4675]: E0124 06:55:16.942350 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:17 crc kubenswrapper[4675]: I0124 06:55:17.941986 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:17 crc kubenswrapper[4675]: I0124 06:55:17.941986 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:17 crc kubenswrapper[4675]: E0124 06:55:17.942227 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:17 crc kubenswrapper[4675]: E0124 06:55:17.942392 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:18 crc kubenswrapper[4675]: I0124 06:55:18.942963 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:18 crc kubenswrapper[4675]: I0124 06:55:18.943147 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:18 crc kubenswrapper[4675]: E0124 06:55:18.943145 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:18 crc kubenswrapper[4675]: E0124 06:55:18.943240 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:18 crc kubenswrapper[4675]: E0124 06:55:18.982545 4675 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 24 06:55:19 crc kubenswrapper[4675]: E0124 06:55:19.029224 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 06:55:19 crc kubenswrapper[4675]: I0124 06:55:19.942163 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:19 crc kubenswrapper[4675]: E0124 06:55:19.942374 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:19 crc kubenswrapper[4675]: I0124 06:55:19.942172 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:19 crc kubenswrapper[4675]: E0124 06:55:19.942843 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:20 crc kubenswrapper[4675]: I0124 06:55:20.942119 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:20 crc kubenswrapper[4675]: I0124 06:55:20.942153 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:20 crc kubenswrapper[4675]: E0124 06:55:20.942279 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:20 crc kubenswrapper[4675]: E0124 06:55:20.942394 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:21 crc kubenswrapper[4675]: I0124 06:55:21.942248 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:21 crc kubenswrapper[4675]: I0124 06:55:21.942275 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:21 crc kubenswrapper[4675]: E0124 06:55:21.942493 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:21 crc kubenswrapper[4675]: E0124 06:55:21.942618 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:22 crc kubenswrapper[4675]: I0124 06:55:22.942327 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:22 crc kubenswrapper[4675]: E0124 06:55:22.942581 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:22 crc kubenswrapper[4675]: I0124 06:55:22.943159 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:22 crc kubenswrapper[4675]: E0124 06:55:22.943311 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:23 crc kubenswrapper[4675]: I0124 06:55:23.941857 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:23 crc kubenswrapper[4675]: E0124 06:55:23.942022 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:23 crc kubenswrapper[4675]: I0124 06:55:23.941861 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:23 crc kubenswrapper[4675]: E0124 06:55:23.942233 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:24 crc kubenswrapper[4675]: E0124 06:55:24.031116 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 06:55:24 crc kubenswrapper[4675]: I0124 06:55:24.941435 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:24 crc kubenswrapper[4675]: I0124 06:55:24.941524 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:24 crc kubenswrapper[4675]: E0124 06:55:24.941565 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:24 crc kubenswrapper[4675]: E0124 06:55:24.941671 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:25 crc kubenswrapper[4675]: I0124 06:55:25.942568 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:25 crc kubenswrapper[4675]: I0124 06:55:25.942605 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:25 crc kubenswrapper[4675]: E0124 06:55:25.942748 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:25 crc kubenswrapper[4675]: E0124 06:55:25.942955 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:25 crc kubenswrapper[4675]: I0124 06:55:25.943927 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.760699 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.763273 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerStarted","Data":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.764262 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.772619 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8mdgj"] Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.772758 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:26 crc kubenswrapper[4675]: E0124 06:55:26.772864 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.800973 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podStartSLOduration=109.800956345 podStartE2EDuration="1m49.800956345s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:26.800781431 +0000 UTC m=+128.096886654" watchObservedRunningTime="2026-01-24 06:55:26.800956345 +0000 UTC m=+128.097061558" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.944438 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:26 crc kubenswrapper[4675]: E0124 06:55:26.944564 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:26 crc kubenswrapper[4675]: I0124 06:55:26.945363 4675 scope.go:117] "RemoveContainer" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe" Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.768615 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log" Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.768696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b"} Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.942027 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:27 crc kubenswrapper[4675]: I0124 06:55:27.942135 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:27 crc kubenswrapper[4675]: E0124 06:55:27.942271 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:27 crc kubenswrapper[4675]: E0124 06:55:27.942374 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:28 crc kubenswrapper[4675]: I0124 06:55:28.942071 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:28 crc kubenswrapper[4675]: I0124 06:55:28.942090 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:28 crc kubenswrapper[4675]: E0124 06:55:28.943244 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:28 crc kubenswrapper[4675]: E0124 06:55:28.943493 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:29 crc kubenswrapper[4675]: E0124 06:55:29.032866 4675 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 06:55:29 crc kubenswrapper[4675]: I0124 06:55:29.942295 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:29 crc kubenswrapper[4675]: I0124 06:55:29.942355 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:29 crc kubenswrapper[4675]: E0124 06:55:29.942450 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:29 crc kubenswrapper[4675]: E0124 06:55:29.942537 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:30 crc kubenswrapper[4675]: I0124 06:55:30.942501 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:30 crc kubenswrapper[4675]: I0124 06:55:30.942662 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:30 crc kubenswrapper[4675]: E0124 06:55:30.942782 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:30 crc kubenswrapper[4675]: E0124 06:55:30.942855 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:31 crc kubenswrapper[4675]: I0124 06:55:31.942331 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:31 crc kubenswrapper[4675]: I0124 06:55:31.942340 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:31 crc kubenswrapper[4675]: E0124 06:55:31.942534 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:31 crc kubenswrapper[4675]: E0124 06:55:31.942649 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:32 crc kubenswrapper[4675]: I0124 06:55:32.941673 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:32 crc kubenswrapper[4675]: I0124 06:55:32.941697 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:32 crc kubenswrapper[4675]: E0124 06:55:32.941975 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8mdgj" podUID="9b6e6bdc-02e8-45ac-b89d-caf409ba451e" Jan 24 06:55:32 crc kubenswrapper[4675]: E0124 06:55:32.942141 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 24 06:55:33 crc kubenswrapper[4675]: I0124 06:55:33.942186 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:33 crc kubenswrapper[4675]: I0124 06:55:33.942274 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:33 crc kubenswrapper[4675]: E0124 06:55:33.942468 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 24 06:55:33 crc kubenswrapper[4675]: E0124 06:55:33.942655 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.486620 4675 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.538236 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.538815 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.539594 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.541597 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.547342 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.551861 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.551865 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.552615 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.552631 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.554134 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.555319 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kz26"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.555755 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.556763 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.557356 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.558631 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.559025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.560146 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.560707 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561064 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561129 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561502 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561649 4675 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-s7phr"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561761 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.561792 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.562512 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.562660 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.571397 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.572047 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580296 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580589 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580739 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.580883 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.583703 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xwk6j"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.584193 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.589280 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.590492 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.592596 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593082 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593343 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593367 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593571 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593593 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593702 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593766 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593832 4675 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593871 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593893 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593929 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.593700 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594025 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594056 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594072 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594126 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594168 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594192 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.594030 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597447 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597554 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597688 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597769 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597828 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.597914 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.598238 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.598338 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.599114 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.599125 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.602933 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.605705 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c64jl"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.606085 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.606218 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.608747 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.609370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.609595 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ntpw9"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.610063 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.611028 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m9tnc"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.611371 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.616483 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.636871 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637204 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637449 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637670 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.637849 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.638061 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.640101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.640398 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.641360 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.643075 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.643219 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.640399 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.644867 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.645367 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.672706 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.672830 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.672995 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673055 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673597 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673689 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673770 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673868 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673929 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.673990 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.675976 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676404 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676792 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.676974 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.678204 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.679311 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.677057 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.677568 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.678406 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.680438 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jntdn"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.681088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jntdn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.681419 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-577lm"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.682108 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.683937 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.684438 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.688157 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.688225 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.690025 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.691046 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.691674 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.691827 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.692144 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.692163 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693499 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693653 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693686 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693798 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693945 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.693959 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694054 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694143 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694327 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694442 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694514 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694596 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.694813 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.695668 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.695850 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.697163 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.702698 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kz26"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.704403 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.709028 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.709465 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.711141 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.716873 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.717377 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.739246 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.740435 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742780 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fa7730-1346-4cd6-bb9a-b93bb377047d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742832 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58kz\" (UniqueName: \"kubernetes.io/projected/41fa7730-1346-4cd6-bb9a-b93bb377047d-kube-api-access-l58kz\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-serving-cert\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742938 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742960 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.742989 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743030 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-client\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743052 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-default-certificate\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743082 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743141 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4p2m\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-kube-api-access-p4p2m\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743165 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdj4\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-kube-api-access-fsdj4\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743224 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743253 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743282 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743305 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-config\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743336 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743365 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c558c558-e8e3-4914-b88e-f5299916978f-serving-cert\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-service-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743417 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743477 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-node-pullsecrets\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743517 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a446c38f-dc5a-4a87-ba82-3405c0aadae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743546 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-image-import-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743642 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743667 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-trusted-ca\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743692 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a446c38f-dc5a-4a87-ba82-3405c0aadae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743769 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-client\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743807 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743831 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9snnx\" (UniqueName: \"kubernetes.io/projected/737c0ee8-629a-4935-8357-c321e1ff5a41-kube-api-access-9snnx\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743861 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743942 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/927fb92f-2e72-4344-90e4-cfc0357135f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.743970 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744000 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744058 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744084 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-encryption-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744117 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-stats-auth\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744232 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/737c0ee8-629a-4935-8357-c321e1ff5a41-service-ca-bundle\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744297 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744319 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldkd\" (UniqueName: 
\"kubernetes.io/projected/927fb92f-2e72-4344-90e4-cfc0357135f4-kube-api-access-sldkd\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744347 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64c2d\" (UniqueName: \"kubernetes.io/projected/f25231da-465a-433f-9b0b-e37a23bc59b8-kube-api-access-64c2d\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744377 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744395 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744419 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.744438 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3da81853-6557-4653-87f8-c2423aeb3994-machine-approver-tls\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744455 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744477 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxg8z\" (UniqueName: \"kubernetes.io/projected/4b956c8c-f12f-4622-b67d-29349ba463aa-kube-api-access-cxg8z\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744493 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744510 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55gd7\" (UniqueName: \"kubernetes.io/projected/3da81853-6557-4653-87f8-c2423aeb3994-kube-api-access-55gd7\") pod 
\"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744524 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-metrics-certs\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744540 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-audit\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc55\" (UniqueName: \"kubernetes.io/projected/c558c558-e8e3-4914-b88e-f5299916978f-kube-api-access-qcc55\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-metrics-tls\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744610 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-p28rq\" (UniqueName: \"kubernetes.io/projected/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-kube-api-access-p28rq\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744628 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fa7730-1346-4cd6-bb9a-b93bb377047d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744660 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-serving-cert\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744818 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-config\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " 
pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744872 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-auth-proxy-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744909 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.744947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-audit-dir\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.747999 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.748200 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.749102 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.753957 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.754713 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.755059 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.760783 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.761534 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.762260 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.762846 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.763361 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.772710 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.773354 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.776825 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.777497 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f895q"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.777955 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.778145 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.778207 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.783731 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.786436 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.786619 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.787419 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.787781 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.787995 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.789594 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.791399 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.791838 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792345 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792831 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792925 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792952 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.792966 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.800927 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.801109 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.801624 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.802111 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805237 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-s7phr"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805263 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805274 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.805786 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.807917 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.808382 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-577lm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.809349 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.811126 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ntpw9"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.813388 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.813514 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.819077 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.820801 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.821700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.823168 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.823376 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.823960 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m9tnc"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.824979 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.825990 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.827326 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.830136 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jntdn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.831510 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.832617 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9zjhs"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.833370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.833952 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.835959 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.840688 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.843500 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.844531 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.849074 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853391 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-images\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853887 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc3478-ba63-46d9-a78d-728fd442a3a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pghrz\" (UniqueName: \"kubernetes.io/projected/bba258ca-d05a-417e-8a91-73e603062c20-kube-api-access-pghrz\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853964 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b49v\" (UniqueName: \"kubernetes.io/projected/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-kube-api-access-7b49v\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.853991 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7266e9dc-a776-4331-a8d5-324deae3a589-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854030 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/737c0ee8-629a-4935-8357-c321e1ff5a41-service-ca-bundle\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854122 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854169 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfb9d06-165a-4595-9422-d6b22e311ec2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854238 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64c2d\" (UniqueName: \"kubernetes.io/projected/f25231da-465a-433f-9b0b-e37a23bc59b8-kube-api-access-64c2d\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854272 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlqvq\" (UniqueName: \"kubernetes.io/projected/46902882-1cf1-4d7d-aa61-4502520d171f-kube-api-access-hlqvq\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854327 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb44d\" (UniqueName: \"kubernetes.io/projected/1ffc3478-ba63-46d9-a78d-728fd442a3a2-kube-api-access-gb44d\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854359 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854415 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854441 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxg8z\" (UniqueName: \"kubernetes.io/projected/4b956c8c-f12f-4622-b67d-29349ba463aa-kube-api-access-cxg8z\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55gd7\" (UniqueName: \"kubernetes.io/projected/3da81853-6557-4653-87f8-c2423aeb3994-kube-api-access-55gd7\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28rq\" (UniqueName: \"kubernetes.io/projected/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-kube-api-access-p28rq\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854854 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcc55\" (UniqueName: \"kubernetes.io/projected/c558c558-e8e3-4914-b88e-f5299916978f-kube-api-access-qcc55\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.854955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-images\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-serving-cert\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855049 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b664684-ced7-4027-8050-6da6e83d0fd7-serving-cert\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855128 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-audit-dir\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855805 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/737c0ee8-629a-4935-8357-c321e1ff5a41-service-ca-bundle\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.856457 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f895q"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.856492 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.858431 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.858675 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-audit-dir\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.855341 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859140 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58kz\" (UniqueName: \"kubernetes.io/projected/41fa7730-1346-4cd6-bb9a-b93bb377047d-kube-api-access-l58kz\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859335 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859381 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-encryption-config\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.859458 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fa7730-1346-4cd6-bb9a-b93bb377047d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.860950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrwb\" (UniqueName: \"kubernetes.io/projected/4bfd659c-336a-4497-bb5b-eaf18b1118e3-kube-api-access-fnrwb\") pod \"downloads-7954f5f757-jntdn\" (UID: \"4bfd659c-336a-4497-bb5b-eaf18b1118e3\") " pod="openshift-console/downloads-7954f5f757-jntdn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.860997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.861039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7gw\" (UniqueName: \"kubernetes.io/projected/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-kube-api-access-4w7gw\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.861126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-default-certificate\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.861830 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fa7730-1346-4cd6-bb9a-b93bb377047d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862326 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862495 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4p2m\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-kube-api-access-p4p2m\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862590 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862785 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdj4\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-kube-api-access-fsdj4\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862888 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.862968 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863515 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863587 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863635 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863670 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-config\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863703 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.863872 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"]
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865304 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7266e9dc-a776-4331-a8d5-324deae3a589-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c558c558-e8e3-4914-b88e-f5299916978f-serving-cert\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865488 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a88693-6b36-417b-829e-50981ccff9f7-config\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.867253 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.867813 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.865922 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-service-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868413 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868470 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-key\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-node-pullsecrets\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868539 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-serving-cert\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868585 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79a88693-6b36-417b-829e-50981ccff9f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868663 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a446c38f-dc5a-4a87-ba82-3405c0aadae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868786 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868806 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868858 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/927fb92f-2e72-4344-90e4-cfc0357135f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868895 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba258ca-d05a-417e-8a91-73e603062c20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868920 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffc3478-ba63-46d9-a78d-728fd442a3a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868949 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-etcd-client\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.868982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869007 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbfb9d06-165a-4595-9422-d6b22e311ec2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869044 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7266e9dc-a776-4331-a8d5-324deae3a589-config\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869068 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-encryption-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869094 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-stats-auth\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869112 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-config\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869155 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-proxy-tls\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869207 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sldkd\" (UniqueName: \"kubernetes.io/projected/927fb92f-2e72-4344-90e4-cfc0357135f4-kube-api-access-sldkd\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46902882-1cf1-4d7d-aa61-4502520d171f-proxy-tls\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869417 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3da81853-6557-4653-87f8-c2423aeb3994-machine-approver-tls\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869437 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-audit\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-metrics-certs\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-metrics-tls\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869533 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fa7730-1346-4cd6-bb9a-b93bb377047d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869549 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpxn7\" (UniqueName: \"kubernetes.io/projected/0b664684-ced7-4027-8050-6da6e83d0fd7-kube-api-access-gpxn7\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46902882-1cf1-4d7d-aa61-4502520d171f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-config\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869631 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") "
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-serving-cert\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-auth-proxy-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869738 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-serving-cert\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869760 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869774 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869792 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-config\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869812 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869850 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 
06:55:34.869874 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-client\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869898 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79a88693-6b36-417b-829e-50981ccff9f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8817d706-baea-4924-868d-c656652d9111-audit-dir\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-audit-policies\") pod 
\"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.869982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vngs\" (UniqueName: \"kubernetes.io/projected/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-kube-api-access-2vngs\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870005 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfb9d06-165a-4595-9422-d6b22e311ec2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870077 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a446c38f-dc5a-4a87-ba82-3405c0aadae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-image-import-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870116 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870134 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-cabundle\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870169 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870188 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.870206 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-trusted-ca\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870227 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-client\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870242 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870262 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870320 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9snnx\" (UniqueName: \"kubernetes.io/projected/737c0ee8-629a-4935-8357-c321e1ff5a41-kube-api-access-9snnx\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " 
pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870340 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870383 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcnb\" (UniqueName: \"kubernetes.io/projected/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-kube-api-access-zgcnb\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870411 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870454 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ptc9p\" (UniqueName: \"kubernetes.io/projected/8817d706-baea-4924-868d-c656652d9111-kube-api-access-ptc9p\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.870520 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.871175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4b956c8c-f12f-4622-b67d-29349ba463aa-node-pullsecrets\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.871232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.872100 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-default-certificate\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.873210 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874042 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a446c38f-dc5a-4a87-ba82-3405c0aadae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874562 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874597 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.874914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-config\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.875266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-service-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.878126 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.878344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/927fb92f-2e72-4344-90e4-cfc0357135f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.878531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-serving-cert\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.888806 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-audit\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889283 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " 
pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889699 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-encryption-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.889862 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891031 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-ca\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891091 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-serving-cert\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891350 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fa7730-1346-4cd6-bb9a-b93bb377047d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891549 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-metrics-certs\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.891689 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892006 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892024 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-metrics-tls\") pod \"dns-operator-744455d44c-6kz26\" (UID: 
\"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892210 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892658 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.892276 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c558c558-e8e3-4914-b88e-f5299916978f-serving-cert\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.893825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-image-import-ca\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.894113 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod 
\"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.894436 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/737c0ee8-629a-4935-8357-c321e1ff5a41-stats-auth\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.894998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.895053 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.895148 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.895405 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da81853-6557-4653-87f8-c2423aeb3994-auth-proxy-config\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 
crc kubenswrapper[4675]: I0124 06:55:34.896067 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a446c38f-dc5a-4a87-ba82-3405c0aadae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.896662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b956c8c-f12f-4622-b67d-29349ba463aa-config\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.897546 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f25231da-465a-433f-9b0b-e37a23bc59b8-etcd-client\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.898702 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-trusted-ca\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.898956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.899297 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.899514 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.899837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.899944 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.900377 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.900543 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.900810 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3da81853-6557-4653-87f8-c2423aeb3994-machine-approver-tls\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901768 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901811 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.901915 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c558c558-e8e3-4914-b88e-f5299916978f-config\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.902461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b956c8c-f12f-4622-b67d-29349ba463aa-etcd-client\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.903037 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m8cvs"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.903478 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.903783 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.905321 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.905691 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.906893 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.907900 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pn69w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.908786 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.908970 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m8cvs"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.910125 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pn69w"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.911132 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-24zjn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.911508 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.912614 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-24zjn"] Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.920776 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.941421 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.941587 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.941633 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.961293 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971351 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-encryption-config\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971416 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnrwb\" (UniqueName: \"kubernetes.io/projected/4bfd659c-336a-4497-bb5b-eaf18b1118e3-kube-api-access-fnrwb\") pod \"downloads-7954f5f757-jntdn\" (UID: \"4bfd659c-336a-4497-bb5b-eaf18b1118e3\") " 
pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971452 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7gw\" (UniqueName: \"kubernetes.io/projected/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-kube-api-access-4w7gw\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971576 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971623 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7266e9dc-a776-4331-a8d5-324deae3a589-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971655 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a88693-6b36-417b-829e-50981ccff9f7-config\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971733 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-key\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971759 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-serving-cert\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971779 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971822 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79a88693-6b36-417b-829e-50981ccff9f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971869 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba258ca-d05a-417e-8a91-73e603062c20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971893 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971914 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffc3478-ba63-46d9-a78d-728fd442a3a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbfb9d06-165a-4595-9422-d6b22e311ec2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971958 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-etcd-client\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.971981 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972004 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7266e9dc-a776-4331-a8d5-324deae3a589-config\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972026 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-config\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972062 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-proxy-tls\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46902882-1cf1-4d7d-aa61-4502520d171f-proxy-tls\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972113 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/46902882-1cf1-4d7d-aa61-4502520d171f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972135 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972159 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpxn7\" (UniqueName: \"kubernetes.io/projected/0b664684-ced7-4027-8050-6da6e83d0fd7-kube-api-access-gpxn7\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972181 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-serving-cert\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-config\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972287 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79a88693-6b36-417b-829e-50981ccff9f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972308 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/8817d706-baea-4924-868d-c656652d9111-audit-dir\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-audit-policies\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972337 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972357 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vngs\" (UniqueName: \"kubernetes.io/projected/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-kube-api-access-2vngs\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972417 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfb9d06-165a-4595-9422-d6b22e311ec2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972439 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972454 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-cabundle\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972517 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgcnb\" (UniqueName: \"kubernetes.io/projected/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-kube-api-access-zgcnb\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptc9p\" (UniqueName: \"kubernetes.io/projected/8817d706-baea-4924-868d-c656652d9111-kube-api-access-ptc9p\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972553 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-images\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972587 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pghrz\" (UniqueName: \"kubernetes.io/projected/bba258ca-d05a-417e-8a91-73e603062c20-kube-api-access-pghrz\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972603 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b49v\" (UniqueName: \"kubernetes.io/projected/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-kube-api-access-7b49v\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972618 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7266e9dc-a776-4331-a8d5-324deae3a589-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972671 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc3478-ba63-46d9-a78d-728fd442a3a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/cbfb9d06-165a-4595-9422-d6b22e311ec2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972754 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlqvq\" (UniqueName: \"kubernetes.io/projected/46902882-1cf1-4d7d-aa61-4502520d171f-kube-api-access-hlqvq\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972794 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb44d\" (UniqueName: \"kubernetes.io/projected/1ffc3478-ba63-46d9-a78d-728fd442a3a2-kube-api-access-gb44d\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"route-controller-manager-6576b87f9c-xcztf\" 
(UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972861 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-images\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.972933 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b664684-ced7-4027-8050-6da6e83d0fd7-serving-cert\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974338 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-serving-cert\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974407 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974447 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-images\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.974899 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8817d706-baea-4924-868d-c656652d9111-audit-dir\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975156 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-config\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975159 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975310 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bba258ca-d05a-417e-8a91-73e603062c20-config\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975362 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffc3478-ba63-46d9-a78d-728fd442a3a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.975919 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.976177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8817d706-baea-4924-868d-c656652d9111-audit-policies\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.976627 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bba258ca-d05a-417e-8a91-73e603062c20-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" Jan 24 06:55:34 crc 
kubenswrapper[4675]: I0124 06:55:34.976841 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffc3478-ba63-46d9-a78d-728fd442a3a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.977069 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46902882-1cf1-4d7d-aa61-4502520d171f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.977487 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-serving-cert\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.981464 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-encryption-config\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.983105 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8817d706-baea-4924-868d-c656652d9111-etcd-client\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: 
\"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.989367 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 24 06:55:34 crc kubenswrapper[4675]: I0124 06:55:34.994130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.000667 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.002321 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b664684-ced7-4027-8050-6da6e83d0fd7-service-ca-bundle\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.021149 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.040934 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.045889 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b664684-ced7-4027-8050-6da6e83d0fd7-serving-cert\") pod 
\"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.060465 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.102018 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.120818 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.127184 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79a88693-6b36-417b-829e-50981ccff9f7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.140382 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.161196 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.163430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79a88693-6b36-417b-829e-50981ccff9f7-config\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: 
\"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.180290 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.200618 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.220920 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.227753 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbfb9d06-165a-4595-9422-d6b22e311ec2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.240854 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.260698 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.264860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbfb9d06-165a-4595-9422-d6b22e311ec2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.282308 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.301047 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.320378 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.341353 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.361955 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.380445 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.389356 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.401668 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.406662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.421461 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.425964 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.446175 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.461525 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.481561 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.487292 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7266e9dc-a776-4331-a8d5-324deae3a589-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.501755 4675 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.507041 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7266e9dc-a776-4331-a8d5-324deae3a589-config\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.521597 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.541316 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.561689 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.582521 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.602045 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.609508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 
06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.621439 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.625452 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.641623 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.661994 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.681495 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.691440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46902882-1cf1-4d7d-aa61-4502520d171f-proxy-tls\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.701967 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.720642 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.726489 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-images\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.740983 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.761674 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.766417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-key\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.779116 4675 request.go:700] Waited for 1.000567456s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.781696 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.801677 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.804246 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-signing-cabundle\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.820912 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.840789 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.847785 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-proxy-tls\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.881903 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.901942 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.921452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.940616 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.941577 4675 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.941667 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.960938 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 24 06:55:35 crc kubenswrapper[4675]: I0124 06:55:35.981154 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.000785 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.021510 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.041356 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.060661 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.080787 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.101759 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 
06:55:36.122385 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.141388 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.161610 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.181956 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.201144 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.221972 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.242011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.261438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.284779 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.301008 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.331773 4675 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.340942 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.361114 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.381270 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.401086 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.420797 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.461338 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxg8z\" (UniqueName: \"kubernetes.io/projected/4b956c8c-f12f-4622-b67d-29349ba463aa-kube-api-access-cxg8z\") pod \"apiserver-76f77b778f-s7phr\" (UID: \"4b956c8c-f12f-4622-b67d-29349ba463aa\") " pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.490601 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55gd7\" (UniqueName: \"kubernetes.io/projected/3da81853-6557-4653-87f8-c2423aeb3994-kube-api-access-55gd7\") pod \"machine-approver-56656f9798-b8rtd\" (UID: \"3da81853-6557-4653-87f8-c2423aeb3994\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.501124 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-64c2d\" (UniqueName: \"kubernetes.io/projected/f25231da-465a-433f-9b0b-e37a23bc59b8-kube-api-access-64c2d\") pod \"etcd-operator-b45778765-m9tnc\" (UID: \"f25231da-465a-433f-9b0b-e37a23bc59b8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.508055 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.519709 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28rq\" (UniqueName: \"kubernetes.io/projected/3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe-kube-api-access-p28rq\") pod \"dns-operator-744455d44c-6kz26\" (UID: \"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe\") " pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.534360 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcc55\" (UniqueName: \"kubernetes.io/projected/c558c558-e8e3-4914-b88e-f5299916978f-kube-api-access-qcc55\") pod \"console-operator-58897d9998-ntpw9\" (UID: \"c558c558-e8e3-4914-b88e-f5299916978f\") " pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.554803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58kz\" (UniqueName: \"kubernetes.io/projected/41fa7730-1346-4cd6-bb9a-b93bb377047d-kube-api-access-l58kz\") pod \"openshift-apiserver-operator-796bbdcf4f-dn5v4\" (UID: \"41fa7730-1346-4cd6-bb9a-b93bb377047d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.576943 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.584929 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4p2m\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-kube-api-access-p4p2m\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.611561 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdj4\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-kube-api-access-fsdj4\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.666201 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.671204 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d05ecde1-a559-4a6e-8e2f-cdabe4865ce1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k98cw\" (UID: \"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.675440 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ntpw9"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.684260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"console-f9d7485db-c64jl\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.688796 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldkd\" (UniqueName: \"kubernetes.io/projected/927fb92f-2e72-4344-90e4-cfc0357135f4-kube-api-access-sldkd\") pod \"cluster-samples-operator-665b6dd947-r54lt\" (UID: \"927fb92f-2e72-4344-90e4-cfc0357135f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.690892 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"oauth-openshift-558db77b4-cnnh9\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.697339 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a446c38f-dc5a-4a87-ba82-3405c0aadae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9tn\" (UID: \"a446c38f-dc5a-4a87-ba82-3405c0aadae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.699435 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.717196 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod \"controller-manager-879f6c89f-tgs5c\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.728006 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.738378 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9snnx\" (UniqueName: \"kubernetes.io/projected/737c0ee8-629a-4935-8357-c321e1ff5a41-kube-api-access-9snnx\") pod \"router-default-5444994796-xwk6j\" (UID: \"737c0ee8-629a-4935-8357-c321e1ff5a41\") " pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.742906 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.746091 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.764693 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.779474 4675 request.go:700] Waited for 1.875246936s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.782774 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.787271 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.800587 4675 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.821629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" event={"ID":"3da81853-6557-4653-87f8-c2423aeb3994","Type":"ContainerStarted","Data":"0a2ca7386eb6561b8002b4357ae9c07c7c6349ff30e417ea7491f45e6bde772f"}
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.821889 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.841214 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.845581 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.860654 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.885819 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.901637 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.908191 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.913795 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.920778 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.926917 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.957437 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.964002 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 24 06:55:36 crc kubenswrapper[4675]: I0124 06:55:36.993936 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:36.998096 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.014050 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.041567 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.051408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnrwb\" (UniqueName: \"kubernetes.io/projected/4bfd659c-336a-4497-bb5b-eaf18b1118e3-kube-api-access-fnrwb\") pod \"downloads-7954f5f757-jntdn\" (UID: \"4bfd659c-336a-4497-bb5b-eaf18b1118e3\") " pod="openshift-console/downloads-7954f5f757-jntdn"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.073549 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7gw\" (UniqueName: \"kubernetes.io/projected/8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47-kube-api-access-4w7gw\") pod \"openshift-config-operator-7777fb866f-b7h9m\" (UID: \"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.074704 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-s7phr"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.101837 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7266e9dc-a776-4331-a8d5-324deae3a589-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n66cj\" (UID: \"7266e9dc-a776-4331-a8d5-324deae3a589\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.102399 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m9tnc"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.108648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vngs\" (UniqueName: \"kubernetes.io/projected/ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe-kube-api-access-2vngs\") pod \"service-ca-9c57cc56f-f895q\" (UID: \"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe\") " pod="openshift-service-ca/service-ca-9c57cc56f-f895q"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.110415 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.119448 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b49v\" (UniqueName: \"kubernetes.io/projected/e71a6ca6-4ef8-4765-a0ae-0809a6343e38-kube-api-access-7b49v\") pod \"machine-config-operator-74547568cd-9mcxb\" (UID: \"e71a6ca6-4ef8-4765-a0ae-0809a6343e38\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:37 crc kubenswrapper[4675]: W0124 06:55:37.119587 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b956c8c_f12f_4622_b67d_29349ba463aa.slice/crio-34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8 WatchSource:0}: Error finding container 34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8: Status 404 returned error can't find the container with id 34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.136326 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.144636 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgcnb\" (UniqueName: \"kubernetes.io/projected/9dc9d935-27cf-4fac-804c-b80a9eb2d4a3-kube-api-access-zgcnb\") pod \"kube-storage-version-migrator-operator-b67b599dd-lwkzm\" (UID: \"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.147447 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6kz26"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.148536 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f895q"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.155506 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.161340 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptc9p\" (UniqueName: \"kubernetes.io/projected/8817d706-baea-4924-868d-c656652d9111-kube-api-access-ptc9p\") pod \"apiserver-7bbb656c7d-hjk5f\" (UID: \"8817d706-baea-4924-868d-c656652d9111\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.197822 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pghrz\" (UniqueName: \"kubernetes.io/projected/bba258ca-d05a-417e-8a91-73e603062c20-kube-api-access-pghrz\") pod \"machine-api-operator-5694c8668f-577lm\" (UID: \"bba258ca-d05a-417e-8a91-73e603062c20\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.212139 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbfb9d06-165a-4595-9422-d6b22e311ec2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-m2d8c\" (UID: \"cbfb9d06-165a-4595-9422-d6b22e311ec2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.220320 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ntpw9"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.221578 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.225176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"route-controller-manager-6576b87f9c-xcztf\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.235980 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlqvq\" (UniqueName: \"kubernetes.io/projected/46902882-1cf1-4d7d-aa61-4502520d171f-kube-api-access-hlqvq\") pod \"machine-config-controller-84d6567774-jz9jr\" (UID: \"46902882-1cf1-4d7d-aa61-4502520d171f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.260328 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb44d\" (UniqueName: \"kubernetes.io/projected/1ffc3478-ba63-46d9-a78d-728fd442a3a2-kube-api-access-gb44d\") pod \"openshift-controller-manager-operator-756b6f6bc6-pcwsx\" (UID: \"1ffc3478-ba63-46d9-a78d-728fd442a3a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.291492 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpxn7\" (UniqueName: \"kubernetes.io/projected/0b664684-ced7-4027-8050-6da6e83d0fd7-kube-api-access-gpxn7\") pod \"authentication-operator-69f744f599-x7fgx\" (UID: \"0b664684-ced7-4027-8050-6da6e83d0fd7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.300013 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79a88693-6b36-417b-829e-50981ccff9f7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvcsm\" (UID: \"79a88693-6b36-417b-829e-50981ccff9f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.317582 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.336624 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.341055 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.361704 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.440118 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.443688 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.623472 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.632743 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj"]
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.690832 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.690965 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.691068 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.695176 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jntdn"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.695955 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.696281 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.696696 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.697073 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.697485 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.700062 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.700821 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.701924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.701960 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702020 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702056 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702083 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702165 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.702201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.702633 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.202620553 +0000 UTC m=+139.498725776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.806007 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.806556 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.306528799 +0000 UTC m=+139.602634022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807585 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qt7k\" (UniqueName: \"kubernetes.io/projected/b9d48866-3fcd-4d12-83a2-2aee6060d4c4-kube-api-access-2qt7k\") pod \"migrator-59844c95c7-8hf2l\" (UID: \"b9d48866-3fcd-4d12-83a2-2aee6060d4c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807741 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807770 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-srv-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807815 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm5zq\" (UniqueName: \"kubernetes.io/projected/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-kube-api-access-gm5zq\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-serving-cert\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807928 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e08de50b-8092-4f29-b2a8-a391b4778142-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.807975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlxqt\" (UniqueName: \"kubernetes.io/projected/77311272-8b70-4772-8e4d-9a5f7d94f104-kube-api-access-qlxqt\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808043 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181f90fa-40e7-4179-8866-6756a0cded18-config-volume\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808061 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z56t\" (UniqueName: \"kubernetes.io/projected/cb716cde-084c-490b-a28f-f35c40c0adbb-kube-api-access-5z56t\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808087 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmnl\" (UniqueName: \"kubernetes.io/projected/6c264931-ec70-45fd-a7a3-979e2203eaf8-kube-api-access-wxmnl\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.808104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-socket-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809550 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-profile-collector-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809570 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wmrp\" (UniqueName: \"kubernetes.io/projected/04bf44e3-ad73-4db3-bf58-f4697644bef7-kube-api-access-5wmrp\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809588 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/181f90fa-40e7-4179-8866-6756a0cded18-metrics-tls\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809605 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-srv-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7vs\" (UniqueName: \"kubernetes.io/projected/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-kube-api-access-lg7vs\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809668 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-certs\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809695 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-cert\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809755 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4p66\" (UniqueName: \"kubernetes.io/projected/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-kube-api-access-l4p66\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809906 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809926 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-plugins-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809954 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-node-bootstrap-token\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.809986 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.810013 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmqr\" (UniqueName: \"kubernetes.io/projected/e08de50b-8092-4f29-b2a8-a391b4778142-kube-api-access-llmqr\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.810031 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"image-registry-697d97f7c8-qkls6\"
(UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.810067 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-registration-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.813676 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgc22\" (UniqueName: \"kubernetes.io/projected/181f90fa-40e7-4179-8866-6756a0cded18-kube-api-access-cgc22\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.813712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cb716cde-084c-490b-a28f-f35c40c0adbb-tmpfs\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.813689 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814081 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-webhook-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814164 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jglzh\" (UniqueName: \"kubernetes.io/projected/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-kube-api-access-jglzh\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814280 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-mountpoint-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxnm\" (UniqueName: \"kubernetes.io/projected/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-kube-api-access-ksxnm\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.814920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.815458 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77311272-8b70-4772-8e4d-9a5f7d94f104-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.816746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.818071 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821607 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821687 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-csi-data-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821797 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: 
I0124 06:55:37.821907 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.821937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.822002 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-config\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.823320 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.323304198 +0000 UTC m=+139.619409421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.823734 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.855648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.872785 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerStarted","Data":"1baee155ca04c86836e94a8a309af90387ef167a0b3873a1f4bc0c4361aabb7d"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.876325 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerStarted","Data":"9f62761dfa0e23278a88b4c9d7acb6c23e771672906712e8cf7b32e35ec90e90"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.877926 
4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.896330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerStarted","Data":"2617a8d5990bca5860fe83af255dca72d1f078c4ac17075407e8e2d08aa3e5d0"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.899508 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" event={"ID":"41fa7730-1346-4cd6-bb9a-b93bb377047d","Type":"ContainerStarted","Data":"91f73bf7a6f5da9df7634d2bb39dc0bbffdf7270c481fccf40d8800da117b277"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.899541 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" event={"ID":"41fa7730-1346-4cd6-bb9a-b93bb377047d","Type":"ContainerStarted","Data":"77cc24fd875d6f548090a4f4aaeb9c6d197237aa968f23fd64f7288d7c5a628d"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.911170 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerStarted","Data":"34b9e256f0a363d6706552e34a6c518bfd3c5bc88908cd5eba01b6ceec6a3fd8"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.917638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" 
event={"ID":"927fb92f-2e72-4344-90e4-cfc0357135f4","Type":"ContainerStarted","Data":"7cf4b45890be58180d5e97e3e7c1b66bcdbdabd51ec9db6736553c04ac14342e"} Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.923226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924072 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-config\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924116 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924146 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qt7k\" (UniqueName: \"kubernetes.io/projected/b9d48866-3fcd-4d12-83a2-2aee6060d4c4-kube-api-access-2qt7k\") pod \"migrator-59844c95c7-8hf2l\" (UID: \"b9d48866-3fcd-4d12-83a2-2aee6060d4c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924195 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924221 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-srv-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm5zq\" (UniqueName: \"kubernetes.io/projected/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-kube-api-access-gm5zq\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924284 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-serving-cert\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924308 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924333 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e08de50b-8092-4f29-b2a8-a391b4778142-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924363 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlxqt\" (UniqueName: \"kubernetes.io/projected/77311272-8b70-4772-8e4d-9a5f7d94f104-kube-api-access-qlxqt\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924400 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181f90fa-40e7-4179-8866-6756a0cded18-config-volume\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924422 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z56t\" (UniqueName: \"kubernetes.io/projected/cb716cde-084c-490b-a28f-f35c40c0adbb-kube-api-access-5z56t\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: 
\"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.924446 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmnl\" (UniqueName: \"kubernetes.io/projected/6c264931-ec70-45fd-a7a3-979e2203eaf8-kube-api-access-wxmnl\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.925578 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.42546259 +0000 UTC m=+139.721567873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.929119 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-config\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938582 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-socket-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938632 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-profile-collector-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938662 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wmrp\" (UniqueName: \"kubernetes.io/projected/04bf44e3-ad73-4db3-bf58-f4697644bef7-kube-api-access-5wmrp\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/181f90fa-40e7-4179-8866-6756a0cded18-metrics-tls\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938707 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-srv-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938748 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-cert\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg7vs\" (UniqueName: \"kubernetes.io/projected/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-kube-api-access-lg7vs\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938788 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-certs\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938828 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4p66\" (UniqueName: \"kubernetes.io/projected/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-kube-api-access-l4p66\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938851 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938872 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-plugins-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938897 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-node-bootstrap-token\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938924 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.938992 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmqr\" (UniqueName: \"kubernetes.io/projected/e08de50b-8092-4f29-b2a8-a391b4778142-kube-api-access-llmqr\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-registration-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939057 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgc22\" (UniqueName: \"kubernetes.io/projected/181f90fa-40e7-4179-8866-6756a0cded18-kube-api-access-cgc22\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cb716cde-084c-490b-a28f-f35c40c0adbb-tmpfs\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939115 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-webhook-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jglzh\" (UniqueName: \"kubernetes.io/projected/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-kube-api-access-jglzh\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-mountpoint-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939203 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxnm\" (UniqueName: \"kubernetes.io/projected/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-kube-api-access-ksxnm\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939234 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77311272-8b70-4772-8e4d-9a5f7d94f104-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939271 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-csi-data-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939379 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.939452 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:37 crc kubenswrapper[4675]: E0124 06:55:37.939783 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.439769117 +0000 UTC m=+139.735874340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.940309 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-socket-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.941352 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.948119 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181f90fa-40e7-4179-8866-6756a0cded18-config-volume\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.948656 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-plugins-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.948994 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.958671 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-serving-cert\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.964669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-mountpoint-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.964696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e08de50b-8092-4f29-b2a8-a391b4778142-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.965034 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cb716cde-084c-490b-a28f-f35c40c0adbb-tmpfs\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.967870 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-registration-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.968600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-csi-data-dir\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.969301 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.970860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.974076 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-webhook-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.980538 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" event={"ID":"a446c38f-dc5a-4a87-ba82-3405c0aadae7","Type":"ContainerStarted","Data":"9cd3ec74e8e76aafab28c7545b7eaa9684532e7016de1d0cc3ea26114f1aeab0"}
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.988652 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-node-bootstrap-token\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.989181 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-srv-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.989554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/04bf44e3-ad73-4db3-bf58-f4697644bef7-profile-collector-cert\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.990161 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.990242 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-cert\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.992670 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/181f90fa-40e7-4179-8866-6756a0cded18-metrics-tls\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.992835 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.993386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cb716cde-084c-490b-a28f-f35c40c0adbb-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.995229 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm5zq\" (UniqueName: \"kubernetes.io/projected/5cbb9972-a73e-4826-9457-ae4f93b8d1c8-kube-api-access-gm5zq\") pod \"ingress-canary-24zjn\" (UID: \"5cbb9972-a73e-4826-9457-ae4f93b8d1c8\") " pod="openshift-ingress-canary/ingress-canary-24zjn"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.997815 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6c264931-ec70-45fd-a7a3-979e2203eaf8-srv-cert\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.998180 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-certs\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:37 crc kubenswrapper[4675]: I0124 06:55:37.998376 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/77311272-8b70-4772-8e4d-9a5f7d94f104-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.001481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlxqt\" (UniqueName: \"kubernetes.io/projected/77311272-8b70-4772-8e4d-9a5f7d94f104-kube-api-access-qlxqt\") pod \"package-server-manager-789f6589d5-kwpzk\" (UID: \"77311272-8b70-4772-8e4d-9a5f7d94f104\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.001636 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmnl\" (UniqueName: \"kubernetes.io/projected/6c264931-ec70-45fd-a7a3-979e2203eaf8-kube-api-access-wxmnl\") pod \"olm-operator-6b444d44fb-44bjw\" (UID: \"6c264931-ec70-45fd-a7a3-979e2203eaf8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.012920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" event={"ID":"f25231da-465a-433f-9b0b-e37a23bc59b8","Type":"ContainerStarted","Data":"9b4aaf327852c667a2ce3e6055048ca927fb6f9016c799733a2f7ec80637aacc"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.017900 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f895q"]
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.020508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"marketplace-operator-79b997595-cgv9v\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.033004 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwk6j" event={"ID":"737c0ee8-629a-4935-8357-c321e1ff5a41","Type":"ContainerStarted","Data":"77e69f0250fb15e98bc52e6cb574dc7c673aa1efbd9fd1c4efb7541542931034"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.033084 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwk6j" event={"ID":"737c0ee8-629a-4935-8357-c321e1ff5a41","Type":"ContainerStarted","Data":"4106f1c63accdee57ffae6d09bd304b56ef24ef274946518c05bf979248320d5"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.037804 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" event={"ID":"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1","Type":"ContainerStarted","Data":"2dcdddbf1d4d92d24cf207099caabfd7b1e0fac7143ab29352cf476bb04d5cf0"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.040384 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" event={"ID":"c558c558-e8e3-4914-b88e-f5299916978f","Type":"ContainerStarted","Data":"b44b30232e383de6f55574b508cd739e9b5e68efa043a5cbf2db36bb22978236"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.040711 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.040922 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.540898473 +0000 UTC m=+139.837003696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.041066 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.041345 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.541334055 +0000 UTC m=+139.837439278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.048339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" event={"ID":"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe","Type":"ContainerStarted","Data":"42665a1813af5370cbc64f14f13ea2f1f0efbad5996ac8940983b9a839f5b75b"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.051120 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" event={"ID":"e71a6ca6-4ef8-4765-a0ae-0809a6343e38","Type":"ContainerStarted","Data":"3d51378a0f167c9625d9b20bff5ee58972a623e113e0b9b83984fa63e85d9a2e"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.053627 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" event={"ID":"3da81853-6557-4653-87f8-c2423aeb3994","Type":"ContainerStarted","Data":"002dc8c82822c04b6e62906f21869f215a480bf895ea5d7219771df426e65ddd"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.070892 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qt7k\" (UniqueName: \"kubernetes.io/projected/b9d48866-3fcd-4d12-83a2-2aee6060d4c4-kube-api-access-2qt7k\") pod \"migrator-59844c95c7-8hf2l\" (UID: \"b9d48866-3fcd-4d12-83a2-2aee6060d4c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.074359 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" event={"ID":"7266e9dc-a776-4331-a8d5-324deae3a589","Type":"ContainerStarted","Data":"f7c8eb31ac93a3b2e00ccb0ea8038981a27cd244cb8fddd45dee178b4cea1986"}
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.075386 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.082877 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.090034 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgc22\" (UniqueName: \"kubernetes.io/projected/181f90fa-40e7-4179-8866-6756a0cded18-kube-api-access-cgc22\") pod \"dns-default-m8cvs\" (UID: \"181f90fa-40e7-4179-8866-6756a0cded18\") " pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.091087 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.101400 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4p66\" (UniqueName: \"kubernetes.io/projected/c97bb9d5-f9c0-46b1-a678-d07bbd5d641b-kube-api-access-l4p66\") pod \"machine-config-server-9zjhs\" (UID: \"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b\") " pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.122021 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wmrp\" (UniqueName: \"kubernetes.io/projected/04bf44e3-ad73-4db3-bf58-f4697644bef7-kube-api-access-5wmrp\") pod \"catalog-operator-68c6474976-xwpc2\" (UID: \"04bf44e3-ad73-4db3-bf58-f4697644bef7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.129969 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.130319 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.137917 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.145070 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.145570 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9zjhs"
Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.147264 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.64723919 +0000 UTC m=+139.943344413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.147816 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.148106 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.648097161 +0000 UTC m=+139.944202384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.150804 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm"]
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.153335 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.163690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmqr\" (UniqueName: \"kubernetes.io/projected/e08de50b-8092-4f29-b2a8-a391b4778142-kube-api-access-llmqr\") pod \"control-plane-machine-set-operator-78cbb6b69f-kdjm5\" (UID: \"e08de50b-8092-4f29-b2a8-a391b4778142\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.170712 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg7vs\" (UniqueName: \"kubernetes.io/projected/c84c3367-bd13-4a3a-b8d0-c4a9157ee38f-kube-api-access-lg7vs\") pod \"csi-hostpathplugin-pn69w\" (UID: \"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f\") " pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.174144 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pn69w"
Jan 24 06:55:38 crc kubenswrapper[4675]: W0124 06:55:38.183305 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec1e1be6_05e8_4a21_9fff_f6f8437c4ebe.slice/crio-3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de WatchSource:0}: Error finding container 3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de: Status 404 returned error can't find the container with id 3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.183539 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-24zjn"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.184470 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z56t\" (UniqueName: \"kubernetes.io/projected/cb716cde-084c-490b-a28f-f35c40c0adbb-kube-api-access-5z56t\") pod \"packageserver-d55dfcdfc-v7d6k\" (UID: \"cb716cde-084c-490b-a28f-f35c40c0adbb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.221427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jglzh\" (UniqueName: \"kubernetes.io/projected/aae781f0-edcc-4ea7-8bc5-aa2053d9dc39-kube-api-access-jglzh\") pod \"service-ca-operator-777779d784-4sb9w\" (UID: \"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.241295 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"collect-profiles-29487285-lfs59\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.258665 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.259871 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.759853323 +0000 UTC m=+140.055958546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.335035 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f"]
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.360405 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.360726 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.860700463 +0000 UTC m=+140.156805686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.365959 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.377144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x7fgx"]
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.400038 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.408073 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.421980 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.431274 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksxnm\" (UniqueName: \"kubernetes.io/projected/0a6820b1-d17b-4bf8-961e-ff96d8e79b72-kube-api-access-ksxnm\") pod \"multus-admission-controller-857f4d67dd-rmdpv\" (UID: \"0a6820b1-d17b-4bf8-961e-ff96d8e79b72\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"
Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.461615 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.463743 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:38.963680745 +0000 UTC m=+140.259785968 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: W0124 06:55:38.468877 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a88693_6b36_417b_829e_50981ccff9f7.slice/crio-d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812 WatchSource:0}: Error finding container d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812: Status 404 returned error can't find the container with id d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812 Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.517440 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.565225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.565562 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.06554507 +0000 UTC m=+140.361650293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.588369 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.622109 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-577lm"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.630055 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.630093 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.630589 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.647019 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.667059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.667320 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.167305212 +0000 UTC m=+140.463410435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.715001 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.768448 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.773268 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.273250028 +0000 UTC m=+140.569355251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.774212 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.863741 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jntdn"] Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.884654 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:38 crc kubenswrapper[4675]: E0124 06:55:38.885214 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.385198395 +0000 UTC m=+140.681303618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.903829 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.915346 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:38 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:38 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:38 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:38 crc kubenswrapper[4675]: I0124 06:55:38.915394 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.013853 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.014517 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.514487085 +0000 UTC m=+140.810592308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.041998 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.116876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.117178 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.617153099 +0000 UTC m=+140.913258322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.118124 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.118494 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.618483943 +0000 UTC m=+140.914589166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.168675 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" event={"ID":"0b664684-ced7-4027-8050-6da6e83d0fd7","Type":"ContainerStarted","Data":"ef7302979665f1075ad64fec9fd7e10f599f023692381dcc8fae43558a916682"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.173201 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" event={"ID":"cbfb9d06-165a-4595-9422-d6b22e311ec2","Type":"ContainerStarted","Data":"31d19390d2e422321d73730eed9f9a86324c1a2a61cb9ee87c93467d9b46acfd"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.174772 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" event={"ID":"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47","Type":"ContainerStarted","Data":"e564a07a05a469608a051eeb2837bad6d24efdae469ccf4cf55bd5adac7d9867"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.218549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.218991 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.718974033 +0000 UTC m=+141.015079256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.234370 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerStarted","Data":"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.238053 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.242916 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" event={"ID":"8817d706-baea-4924-868d-c656652d9111","Type":"ContainerStarted","Data":"667091296b006971a6a8c293d0bcb09ff6c8bf8438be76e72de64a5e6a9d2113"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.247482 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.247522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" 
event={"ID":"bba258ca-d05a-417e-8a91-73e603062c20","Type":"ContainerStarted","Data":"f9521d7703eec808be2541f0783a9d10ef64dbf629866b0ef02783c511535335"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.300331 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" event={"ID":"a446c38f-dc5a-4a87-ba82-3405c0aadae7","Type":"ContainerStarted","Data":"5721e943166bebaa714e58987f2c2c1cab9d09ebfcd35e44307ea104f4abba36"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.319710 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k"] Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.320427 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.321103 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.821090904 +0000 UTC m=+141.117196127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.323706 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" event={"ID":"1ffc3478-ba63-46d9-a78d-728fd442a3a2","Type":"ContainerStarted","Data":"6d353e639a9acc9b24c141b15cb4c0467a78f6ebf6a25048129c11a485b6ee4e"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.345858 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" event={"ID":"79a88693-6b36-417b-829e-50981ccff9f7","Type":"ContainerStarted","Data":"d12df7158378d62533ea52ad329cd1f124073081c95da8c9cc018fb3d0627812"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.354293 4675 generic.go:334] "Generic (PLEG): container finished" podID="4b956c8c-f12f-4622-b67d-29349ba463aa" containerID="a1f730c602e0c7b223d6d5879afdd9c311ddb91bc4532df9ccce3e2acc2e7031" exitCode=0 Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.354357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerDied","Data":"a1f730c602e0c7b223d6d5879afdd9c311ddb91bc4532df9ccce3e2acc2e7031"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.366772 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" 
event={"ID":"f25231da-465a-433f-9b0b-e37a23bc59b8","Type":"ContainerStarted","Data":"4216243447ad9603723722b695208fab30625a2e50ba52af5149656da6860268"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.368108 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" event={"ID":"46902882-1cf1-4d7d-aa61-4502520d171f","Type":"ContainerStarted","Data":"e84e64905e1f5cc564577a182a5835196904762cdb0372049f57bbc763f06b85"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.369251 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" event={"ID":"d05ecde1-a559-4a6e-8e2f-cdabe4865ce1","Type":"ContainerStarted","Data":"8ad8c70d2b29dc17627fb5a5f59a09c7979a056a6b600dc8c0a57fcb3dd3a090"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.373170 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerStarted","Data":"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.374075 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.380204 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" event={"ID":"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe","Type":"ContainerStarted","Data":"3b1b005206e13a49119e1c4a21a9b8f763576ba3949f3145ead0c3b6cf6f70de"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.382863 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" 
event={"ID":"c558c558-e8e3-4914-b88e-f5299916978f","Type":"ContainerStarted","Data":"22a13064c0d84be6a2c53890c19bb9f23b1b6ec4ff340a20aa558526065f733a"} Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.383180 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.385984 4675 patch_prober.go:28] interesting pod/console-operator-58897d9998-ntpw9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.386049 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" podUID="c558c558-e8e3-4914-b88e-f5299916978f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.390260 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:55:39 crc kubenswrapper[4675]: W0124 06:55:39.413845 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d48866_3fcd_4d12_83a2_2aee6060d4c4.slice/crio-19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66 WatchSource:0}: Error finding container 19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66: Status 404 returned error can't find the container with id 19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66 Jan 24 06:55:39 crc kubenswrapper[4675]: W0124 06:55:39.414150 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08de50b_8092_4f29_b2a8_a391b4778142.slice/crio-5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe WatchSource:0}: Error finding container 5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe: Status 404 returned error can't find the container with id 5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.425102 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.425975 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:39.925959243 +0000 UTC m=+141.222064456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:39 crc kubenswrapper[4675]: W0124 06:55:39.442073 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb716cde_084c_490b_a28f_f35c40c0adbb.slice/crio-dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f WatchSource:0}: Error finding container dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f: Status 404 returned error can't find the container with id dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.470042 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dn5v4" podStartSLOduration=122.470021704 podStartE2EDuration="2m2.470021704s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.469373838 +0000 UTC m=+140.765479061" watchObservedRunningTime="2026-01-24 06:55:39.470021704 +0000 UTC m=+140.766126927"
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.527890 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.529971 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.029955232 +0000 UTC m=+141.326060455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.629124 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.641830 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xwk6j" podStartSLOduration=122.641812566 podStartE2EDuration="2m2.641812566s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.639550519 +0000 UTC m=+140.935655752" watchObservedRunningTime="2026-01-24 06:55:39.641812566 +0000 UTC m=+140.937917789"
Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.645282 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.145252332 +0000 UTC m=+141.441357555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.748284 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.763449 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.263387422 +0000 UTC m=+141.559492645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.819211 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c64jl" podStartSLOduration=122.819180126 podStartE2EDuration="2m2.819180126s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.816774127 +0000 UTC m=+141.112879350" watchObservedRunningTime="2026-01-24 06:55:39.819180126 +0000 UTC m=+141.115285349"
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.850657 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.851617 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.351597406 +0000 UTC m=+141.647702629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.859529 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k98cw" podStartSLOduration=122.859509694 podStartE2EDuration="2m2.859509694s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.855626837 +0000 UTC m=+141.151732060" watchObservedRunningTime="2026-01-24 06:55:39.859509694 +0000 UTC m=+141.155614917"
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.904014 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"]
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.920609 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:39 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:39 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:39 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.920666 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:39 crc kubenswrapper[4675]: I0124 06:55:39.952495 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:39 crc kubenswrapper[4675]: E0124 06:55:39.952823 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.452810725 +0000 UTC m=+141.748915948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.053368 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.053585 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.553562092 +0000 UTC m=+141.849667315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.118049 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" podStartSLOduration=123.118018822 podStartE2EDuration="2m3.118018822s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:39.949083492 +0000 UTC m=+141.245188715" watchObservedRunningTime="2026-01-24 06:55:40.118018822 +0000 UTC m=+141.414124045"
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.120438 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" podStartSLOduration=123.120431402 podStartE2EDuration="2m3.120431402s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.119159401 +0000 UTC m=+141.415264624" watchObservedRunningTime="2026-01-24 06:55:40.120431402 +0000 UTC m=+141.416536615"
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.154837 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.155185 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.65517055 +0000 UTC m=+141.951275773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.257327 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.257937 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.757921487 +0000 UTC m=+142.054026710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.363898 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-m9tnc" podStartSLOduration=123.363870193 podStartE2EDuration="2m3.363870193s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.318421408 +0000 UTC m=+141.614526631" watchObservedRunningTime="2026-01-24 06:55:40.363870193 +0000 UTC m=+141.659975416"
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.366681 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"]
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.367242 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.367579 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.867568146 +0000 UTC m=+142.163673369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.425200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" event={"ID":"77311272-8b70-4772-8e4d-9a5f7d94f104","Type":"ContainerStarted","Data":"c89a2721bd8c45a1ac00a40e59f297434d8e9b2e853e8ba04189b5a93ad55a5d"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.426340 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" event={"ID":"e71a6ca6-4ef8-4765-a0ae-0809a6343e38","Type":"ContainerStarted","Data":"2645ff1e667ec7a9d8e38bc4961da57d67d0bb2a37e3707c6bfb6c07c3ca3054"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.427110 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jntdn" event={"ID":"4bfd659c-336a-4497-bb5b-eaf18b1118e3","Type":"ContainerStarted","Data":"f3a85470a4355bfe8eac6b16acf14d7120cdd9f3e9447ad6d3620b88fdd0f89d"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.434905 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerStarted","Data":"983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.436137 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9"
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.443355 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m8cvs"]
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.463727 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" event={"ID":"cb716cde-084c-490b-a28f-f35c40c0adbb","Type":"ContainerStarted","Data":"dfd63c687515dbdbd17bee729e3a8d6ac7ff31af3811b055dd61388b0848974f"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.479200 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.480163 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:40.980136208 +0000 UTC m=+142.276241431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.487937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerStarted","Data":"d68bea8b00c026526be03f959939477c57040be0a40f40783ac0e65d642a96db"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.497026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" event={"ID":"e08de50b-8092-4f29-b2a8-a391b4778142","Type":"ContainerStarted","Data":"5bfe94c59b12b72d0659b659a550348b7c9b29ff0f08fc563d936f8cfcfa6bbe"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.519328 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" podStartSLOduration=123.519310066 podStartE2EDuration="2m3.519310066s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.507056971 +0000 UTC m=+141.803162194" watchObservedRunningTime="2026-01-24 06:55:40.519310066 +0000 UTC m=+141.815415289"
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.530649 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9zjhs" event={"ID":"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b","Type":"ContainerStarted","Data":"af29d3bf9d739cd26d1a78b936ffb5d04a1c781d3a1b82a5e20a1b0ab7e39654"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.581430 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.581685 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.081674885 +0000 UTC m=+142.377780108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.590298 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" event={"ID":"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3","Type":"ContainerStarted","Data":"90b5259e216fde7ceb68fc43e10a268392f195bed3c8e5ffd441abf1bb5d215b"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.594788 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"]
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.613199 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2"]
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.660510 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw"]
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.684932 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.685562 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.185546029 +0000 UTC m=+142.481651252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.748330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" event={"ID":"7266e9dc-a776-4331-a8d5-324deae3a589","Type":"ContainerStarted","Data":"c5f33b8de9a539f9d5dab31ee044f987c0098468e34d182e68c9394590c9a407"}
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.788049 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.789266 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.28925413 +0000 UTC m=+142.585359353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.797673 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n66cj" podStartSLOduration=123.79765385 podStartE2EDuration="2m3.79765385s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.796510221 +0000 UTC m=+142.092615444" watchObservedRunningTime="2026-01-24 06:55:40.79765385 +0000 UTC m=+142.093759073"
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.873074 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" event={"ID":"1ffc3478-ba63-46d9-a78d-728fd442a3a2","Type":"ContainerStarted","Data":"b6f24f0c0b21dab8940c7c17f138d5ab2470b6d7da019aba5adc5a75d5f5f6ed"}
Jan 24 06:55:40 crc kubenswrapper[4675]: W0124 06:55:40.873658 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a4e6f5_492a_4b32_aa94_c8eca20b0067.slice/crio-4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8 WatchSource:0}: Error finding container 4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8: Status 404 returned error can't find the container with id 4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.890282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.890388 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.390369286 +0000 UTC m=+142.686474509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.890611 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:40 crc kubenswrapper[4675]: E0124 06:55:40.890927 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.39092038 +0000 UTC m=+142.687025603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.917183 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:40 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:40 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:40 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.917233 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:40 crc kubenswrapper[4675]: I0124 06:55:40.992975 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:40.998322 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.498292702 +0000 UTC m=+142.794397925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:40.998422 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:40.998839 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.498826916 +0000 UTC m=+142.794932139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.039950 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" event={"ID":"b9d48866-3fcd-4d12-83a2-2aee6060d4c4","Type":"ContainerStarted","Data":"19b4fc5a44db15e45014c4d32cc04eaa79330753333c4008daa7dfe8c27e8b66"}
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.057104 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pcwsx" podStartSLOduration=124.05708849 podStartE2EDuration="2m4.05708849s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:40.915056993 +0000 UTC m=+142.211162216" watchObservedRunningTime="2026-01-24 06:55:41.05708849 +0000 UTC m=+142.353193713"
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.058993 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pn69w"]
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.059027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" event={"ID":"3da81853-6557-4653-87f8-c2423aeb3994","Type":"ContainerStarted","Data":"9baf934efad6645f67d8e90940cb3419ae8138e3aafe50f06ba093388c5ea163"}
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.104433 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.105474 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.605456129 +0000 UTC m=+142.901561352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.115805 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" event={"ID":"927fb92f-2e72-4344-90e4-cfc0357135f4","Type":"ContainerStarted","Data":"2c5d5a450c0cf663939a529a8839c7748144ae97ebc8568728a1faad33770d9b"}
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.124448 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b8rtd" podStartSLOduration=124.124428724 podStartE2EDuration="2m4.124428724s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:41.120505245 +0000 UTC m=+142.416610468" watchObservedRunningTime="2026-01-24 06:55:41.124428724 +0000 UTC m=+142.420533947"
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.124832 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w"]
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.154167 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" event={"ID":"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe","Type":"ContainerStarted","Data":"e2d6bc71e9515d71fb3682c2840c077be52e05839a512b7ce9e2a477233b68ea"}
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.177926 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rmdpv"]
Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.206555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.207785 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.707773735 +0000 UTC m=+143.003878948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.235670 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:55:41 crc kubenswrapper[4675]: W0124 06:55:41.288017 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a6820b1_d17b_4bf8_961e_ff96d8e79b72.slice/crio-ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f WatchSource:0}: Error finding container ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f: Status 404 returned error can't find the container with id ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.308048 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.308479 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.80844961 +0000 UTC m=+143.104554833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.323051 4675 csr.go:261] certificate signing request csr-kvtwg is approved, waiting to be issued Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.341128 4675 csr.go:257] certificate signing request csr-kvtwg is issued Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.404198 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-24zjn"] Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.410443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.410789 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:41.910777636 +0000 UTC m=+143.206882859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.514326 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ntpw9" Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.514789 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.515204 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.015187195 +0000 UTC m=+143.311292418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.618789 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.619433 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.119421769 +0000 UTC m=+143.415526992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.727200 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.727639 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.227620751 +0000 UTC m=+143.523725974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.833850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.834169 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.334157523 +0000 UTC m=+143.630262746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.908910 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:41 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:41 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:41 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.909219 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:41 crc kubenswrapper[4675]: I0124 06:55:41.935105 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:41 crc kubenswrapper[4675]: E0124 06:55:41.935506 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:42.435490445 +0000 UTC m=+143.731595658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.036319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.036750 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.536712003 +0000 UTC m=+143.832817216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.136986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.137302 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.637275635 +0000 UTC m=+143.933380858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.137519 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.137851 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.637838509 +0000 UTC m=+143.933943722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.197066 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerStarted","Data":"a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.197121 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerStarted","Data":"50d0cb80aa27ce6cef25c689ef2dda8afc1fb093c0efbca3d65994205d5a3a48"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.238905 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.240396 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.740376641 +0000 UTC m=+144.036481864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.247341 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" event={"ID":"bba258ca-d05a-417e-8a91-73e603062c20","Type":"ContainerStarted","Data":"6f9a6e5f01faa4072f12fcaaab2993c79791c5ae04f9d859c83f5b09af2afb4a"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.255197 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.277880 4675 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v7d6k container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.277936 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" podUID="cb716cde-084c-490b-a28f-f35c40c0adbb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.294878 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" podStartSLOduration=124.294856891 
podStartE2EDuration="2m4.294856891s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.293521978 +0000 UTC m=+143.589627201" watchObservedRunningTime="2026-01-24 06:55:42.294856891 +0000 UTC m=+143.590962114" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.295984 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" podStartSLOduration=125.295978019 podStartE2EDuration="2m5.295978019s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.247630852 +0000 UTC m=+143.543736075" watchObservedRunningTime="2026-01-24 06:55:42.295978019 +0000 UTC m=+143.592083242" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.302128 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"ae72bad0b37788d54a2e3b0b9747c63e0bfeeadfafb6e5276baf401dc53a5940"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.334323 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" event={"ID":"e71a6ca6-4ef8-4765-a0ae-0809a6343e38","Type":"ContainerStarted","Data":"40a2f0264fb19936349e3ea9e0013037ca5660c7a2a8449535225be5bf72b044"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.340375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: 
\"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.341026 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.841015604 +0000 UTC m=+144.137120827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.343345 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-24 06:50:41 +0000 UTC, rotation deadline is 2026-11-20 20:03:12.740702255 +0000 UTC Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.343375 4675 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7213h7m30.397329801s for next certificate rotation Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.352787 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" event={"ID":"79a88693-6b36-417b-829e-50981ccff9f7","Type":"ContainerStarted","Data":"6595c5d705627f9456697d939f600b16c57bd7da43209cb30845009869d536a4"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.360306 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" 
event={"ID":"04bf44e3-ad73-4db3-bf58-f4697644bef7","Type":"ContainerStarted","Data":"2df4ba4ab05551df02515bab6ecbed4f60d3ed1ded667870ce8762b86eb00844"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.369592 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" event={"ID":"927fb92f-2e72-4344-90e4-cfc0357135f4","Type":"ContainerStarted","Data":"b915f19b17bc9aa527d39a1ddad01e14b8c15f51f08e54091a438bbabdfa7c28"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.371862 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47" containerID="fcc9c96fba47d3b6cf2d8428a262fd9072cd577ebc17d43cc1648f28c68c802d" exitCode=0 Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.372040 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" event={"ID":"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47","Type":"ContainerDied","Data":"fcc9c96fba47d3b6cf2d8428a262fd9072cd577ebc17d43cc1648f28c68c802d"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.386155 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" event={"ID":"0b664684-ced7-4027-8050-6da6e83d0fd7","Type":"ContainerStarted","Data":"2b696e3a0fbe09c5ecdc40742aa697ab1d4484b0bd9a2898dca77eb4588d009c"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.399776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jntdn" event={"ID":"4bfd659c-336a-4497-bb5b-eaf18b1118e3","Type":"ContainerStarted","Data":"1932b5cccfff107a78f3758e0d5ff30c3c2c77803215de69b8db70abaf7d4c53"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.400297 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jntdn" Jan 24 06:55:42 crc kubenswrapper[4675]: 
I0124 06:55:42.409214 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvcsm" podStartSLOduration=125.409193168 podStartE2EDuration="2m5.409193168s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.403969547 +0000 UTC m=+143.700074790" watchObservedRunningTime="2026-01-24 06:55:42.409193168 +0000 UTC m=+143.705298391" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.410014 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9mcxb" podStartSLOduration=124.410007089 podStartE2EDuration="2m4.410007089s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.366371919 +0000 UTC m=+143.662477142" watchObservedRunningTime="2026-01-24 06:55:42.410007089 +0000 UTC m=+143.706112312" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.419881 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.420190 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.423087 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" event={"ID":"e08de50b-8092-4f29-b2a8-a391b4778142","Type":"ContainerStarted","Data":"cf163a0f075902fa516be1d8410b4cb6fcb6f82c9c111fcd8ad71a4322c13abc"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.441613 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m8cvs" event={"ID":"181f90fa-40e7-4179-8866-6756a0cded18","Type":"ContainerStarted","Data":"9e1f8e7dfb05d61fadb8e3972bd9b70c01709c8e550db9808258c0f2398b23c3"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.442742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.442823 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.942804467 +0000 UTC m=+144.238909690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.443493 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.444536 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:42.94452479 +0000 UTC m=+144.240630013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.456112 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" event={"ID":"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39","Type":"ContainerStarted","Data":"a13aef70a8fa20b9b60955af495cd0e0680bc4d0d02aa29ac1a4c51bd8644a89"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.515252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-24zjn" event={"ID":"5cbb9972-a73e-4826-9457-ae4f93b8d1c8","Type":"ContainerStarted","Data":"63c2e35c319c9a61a99d53105c17489b19ea3324117691625952d34d70aa2ee1"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.530824 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x7fgx" podStartSLOduration=125.530810986 podStartE2EDuration="2m5.530810986s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.510101969 +0000 UTC m=+143.806207192" watchObservedRunningTime="2026-01-24 06:55:42.530810986 +0000 UTC m=+143.826916199" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.533237 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" 
event={"ID":"9dc9d935-27cf-4fac-804c-b80a9eb2d4a3","Type":"ContainerStarted","Data":"f27948e4990ae52d60fac135f3620df2b0cf9fd5690e045f94839123007012a5"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.546386 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jntdn" podStartSLOduration=125.546372385 podStartE2EDuration="2m5.546372385s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.54058224 +0000 UTC m=+143.836687463" watchObservedRunningTime="2026-01-24 06:55:42.546372385 +0000 UTC m=+143.842477608" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.547572 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.548157 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.048132198 +0000 UTC m=+144.344237421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.557832 4675 generic.go:334] "Generic (PLEG): container finished" podID="8817d706-baea-4924-868d-c656652d9111" containerID="eef6628a01ed135f4f1fcc0eede9318ddd5643d944e656217a557f210613fbb9" exitCode=0 Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.558579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" event={"ID":"8817d706-baea-4924-868d-c656652d9111","Type":"ContainerDied","Data":"eef6628a01ed135f4f1fcc0eede9318ddd5643d944e656217a557f210613fbb9"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.587022 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerStarted","Data":"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.588799 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.600299 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" event={"ID":"0a6820b1-d17b-4bf8-961e-ff96d8e79b72","Type":"ContainerStarted","Data":"ce735703dd3f2d57eaf17ce526990e9a8ffb509958d0524bef38c490323d2c9f"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.601348 
4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r54lt" podStartSLOduration=125.601331917 podStartE2EDuration="2m5.601331917s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.600260671 +0000 UTC m=+143.896365894" watchObservedRunningTime="2026-01-24 06:55:42.601331917 +0000 UTC m=+143.897437140" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.649853 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.651392 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.151375818 +0000 UTC m=+144.447481041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.651409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" event={"ID":"46902882-1cf1-4d7d-aa61-4502520d171f","Type":"ContainerStarted","Data":"cdeb4071b2347fae17da5d3dbfceb41d85a6b13a2fa794af07c6fce6b9ed5896"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.712023 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" event={"ID":"b9d48866-3fcd-4d12-83a2-2aee6060d4c4","Type":"ContainerStarted","Data":"bb78eda42cd1348958f68737828b3a16fb9d934ca4018caa1363e8e321deeb06"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.712074 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" event={"ID":"b9d48866-3fcd-4d12-83a2-2aee6060d4c4","Type":"ContainerStarted","Data":"3e2c864159ef50471155edbccbe526f4f16102eb5c37b262e968b15cc2b211e8"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.752543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" event={"ID":"ec1e1be6-05e8-4a21-9fff-f6f8437c4ebe","Type":"ContainerStarted","Data":"95ff64ff32d0b6a4820c854f1501312cd07b26344cd58f562b81d1798ba151b3"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.753349 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.754349 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.2543346 +0000 UTC m=+144.550439823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.784820 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" podStartSLOduration=124.784801231 podStartE2EDuration="2m4.784801231s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.713997873 +0000 UTC m=+144.010103106" watchObservedRunningTime="2026-01-24 06:55:42.784801231 +0000 UTC m=+144.080906454" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.800983 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" 
event={"ID":"77311272-8b70-4772-8e4d-9a5f7d94f104","Type":"ContainerStarted","Data":"7b1087015054fbea2d83191977fb3690c25eb2522a0c443012dcd1aed9683dc9"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.802053 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.831171 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" event={"ID":"6c264931-ec70-45fd-a7a3-979e2203eaf8","Type":"ContainerStarted","Data":"53d4a10d69c2a6d72ccb7dd578d52fda686b560259acc38eb525b6e7443a8858"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.831223 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" event={"ID":"6c264931-ec70-45fd-a7a3-979e2203eaf8","Type":"ContainerStarted","Data":"2454212a019b938256d7bccbd22f8cc00082ae25e1fdebbe375edf44738a7cdd"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.832787 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.853233 4675 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-44bjw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.853304 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" podUID="6c264931-ec70-45fd-a7a3-979e2203eaf8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: 
connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.854790 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.855099 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.355086047 +0000 UTC m=+144.651191280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.869761 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" event={"ID":"cbfb9d06-165a-4595-9422-d6b22e311ec2","Type":"ContainerStarted","Data":"27c8b4ced38ae2441edede0de818c34a694d2df8abb5521d64288ed585956940"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.878757 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kdjm5" podStartSLOduration=124.878739877 podStartE2EDuration="2m4.878739877s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.79036356 +0000 UTC m=+144.086468783" watchObservedRunningTime="2026-01-24 06:55:42.878739877 +0000 UTC m=+144.174845120" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921035 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:42 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:42 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:42 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921089 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921763 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerStarted","Data":"eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921797 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerStarted","Data":"4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8"} Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.921785 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" 
podStartSLOduration=124.921772843 podStartE2EDuration="2m4.921772843s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.881161008 +0000 UTC m=+144.177266221" watchObservedRunningTime="2026-01-24 06:55:42.921772843 +0000 UTC m=+144.217878066" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.922515 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.947140 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.947190 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.966209 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:42 crc kubenswrapper[4675]: E0124 06:55:42.967212 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.467195078 +0000 UTC m=+144.763300301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:42 crc kubenswrapper[4675]: I0124 06:55:42.986818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" event={"ID":"3bb54ee5-6b4a-4ec8-931c-c61e4c3da2fe","Type":"ContainerStarted","Data":"8ebbdbbb0c150931aeb63fdcc192541ce8e368dff55b99ac948f0eb95f92e221"} Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.024132 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" event={"ID":"a446c38f-dc5a-4a87-ba82-3405c0aadae7","Type":"ContainerStarted","Data":"28e075f2b0b0e3d87573a00f5dbd8bd3559dd7b4ac8bfa1adeccc546fba4e100"} Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.024842 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-24zjn" podStartSLOduration=9.024827877 podStartE2EDuration="9.024827877s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:42.986944821 +0000 UTC m=+144.283050054" watchObservedRunningTime="2026-01-24 06:55:43.024827877 +0000 UTC m=+144.320933100" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.070493 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.072189 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.072197 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.572184621 +0000 UTC m=+144.868289844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.148793 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lwkzm" podStartSLOduration=126.148772143 podStartE2EDuration="2m6.148772143s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.024823677 +0000 UTC m=+144.320928900" watchObservedRunningTime="2026-01-24 06:55:43.148772143 +0000 UTC m=+144.444877366" Jan 24 06:55:43 
crc kubenswrapper[4675]: I0124 06:55:43.149369 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2l" podStartSLOduration=125.149363438 podStartE2EDuration="2m5.149363438s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.13980723 +0000 UTC m=+144.435912453" watchObservedRunningTime="2026-01-24 06:55:43.149363438 +0000 UTC m=+144.445468661" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.173298 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.175189 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.675168963 +0000 UTC m=+144.971274186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.194928 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-m2d8c" podStartSLOduration=126.194912226 podStartE2EDuration="2m6.194912226s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.193425439 +0000 UTC m=+144.489530662" watchObservedRunningTime="2026-01-24 06:55:43.194912226 +0000 UTC m=+144.491017449" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.283606 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f895q" podStartSLOduration=125.283588672 podStartE2EDuration="2m5.283588672s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.241995772 +0000 UTC m=+144.538100995" watchObservedRunningTime="2026-01-24 06:55:43.283588672 +0000 UTC m=+144.579693895" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.284298 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: 
\"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.284695 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.784679388 +0000 UTC m=+145.080784611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.391211 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.391771 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.891754393 +0000 UTC m=+145.187859616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.429064 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9tn" podStartSLOduration=126.429049525 podStartE2EDuration="2m6.429049525s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.423291841 +0000 UTC m=+144.719397064" watchObservedRunningTime="2026-01-24 06:55:43.429049525 +0000 UTC m=+144.725154748" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.429676 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6kz26" podStartSLOduration=126.429670271 podStartE2EDuration="2m6.429670271s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.287818337 +0000 UTC m=+144.583923560" watchObservedRunningTime="2026-01-24 06:55:43.429670271 +0000 UTC m=+144.725775494" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.490262 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" podStartSLOduration=125.490242833 podStartE2EDuration="2m5.490242833s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.478038378 +0000 UTC m=+144.774143601" watchObservedRunningTime="2026-01-24 06:55:43.490242833 +0000 UTC m=+144.786348066" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.492496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.492812 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:43.992800428 +0000 UTC m=+145.288905651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.587130 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podStartSLOduration=125.587115923 podStartE2EDuration="2m5.587115923s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.545623227 +0000 UTC m=+144.841728450" watchObservedRunningTime="2026-01-24 06:55:43.587115923 +0000 UTC m=+144.883221146" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.588838 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" podStartSLOduration=125.588833407 podStartE2EDuration="2m5.588833407s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.585956095 +0000 UTC m=+144.882061308" watchObservedRunningTime="2026-01-24 06:55:43.588833407 +0000 UTC m=+144.884938630" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.593326 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.593504 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.093478923 +0000 UTC m=+145.389584146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.593586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.593919 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.093911343 +0000 UTC m=+145.390016556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.616014 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" podStartSLOduration=125.615998785 podStartE2EDuration="2m5.615998785s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:43.615856702 +0000 UTC m=+144.911961935" watchObservedRunningTime="2026-01-24 06:55:43.615998785 +0000 UTC m=+144.912104008" Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.694871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.695085 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.19505715 +0000 UTC m=+145.491162373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.695296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.695728 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.195695886 +0000 UTC m=+145.491801109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.796876 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.797010 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.296991626 +0000 UTC m=+145.593096849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.797114 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.797463 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.297453949 +0000 UTC m=+145.593559172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.898686 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:43 crc kubenswrapper[4675]: E0124 06:55:43.899104 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.399089027 +0000 UTC m=+145.695194250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.906438 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:43 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:43 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:43 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:43 crc kubenswrapper[4675]: I0124 06:55:43.906490 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.000061 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.000396 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:44.500384378 +0000 UTC m=+145.796489611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.086150 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" event={"ID":"cb716cde-084c-490b-a28f-f35c40c0adbb","Type":"ContainerStarted","Data":"295109398b966303fc6252dcbd66040bc73e914ef4d33d0da91ed4040c1b70fd"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.088813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jz9jr" event={"ID":"46902882-1cf1-4d7d-aa61-4502520d171f","Type":"ContainerStarted","Data":"f1caaca62e8b38a020d9b4a1eb6772a6d7cea588dd4daf5947509156a1f7a16c"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.090112 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9zjhs" event={"ID":"c97bb9d5-f9c0-46b1-a678-d07bbd5d641b","Type":"ContainerStarted","Data":"e9e740d3a132526908b8720bb17f2a3851e13ef991b71779474cc296a17045d6"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.091252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4sb9w" event={"ID":"aae781f0-edcc-4ea7-8bc5-aa2053d9dc39","Type":"ContainerStarted","Data":"d4dd9a29466928a9c0b9e9fe28dddaf91099cf4147f87e645539a7f9a1a82847"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 
06:55:44.092938 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" event={"ID":"8817d706-baea-4924-868d-c656652d9111","Type":"ContainerStarted","Data":"3313826e4b98685bb861cc2e2f131b247e1d9e92b556788fba6f96e0787f1740"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.094514 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" event={"ID":"04bf44e3-ad73-4db3-bf58-f4697644bef7","Type":"ContainerStarted","Data":"3f5b2bcc177dfad596e29552f4fe678709ee3f753687226f9fa03acd57432393"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.095107 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.096962 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" event={"ID":"bba258ca-d05a-417e-8a91-73e603062c20","Type":"ContainerStarted","Data":"1aa5dfc577d5005407ec464568b7f015321b23b063a2757a79bd26dffd9ff55e"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.099829 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" event={"ID":"8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47","Type":"ContainerStarted","Data":"aebee9ecc8fc808840adcbceb7d3f00740ec5905aea20caf9529c8aa384e7173"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.099902 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.100687 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.100807 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.600790596 +0000 UTC m=+145.896895819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.100997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.101301 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.601293009 +0000 UTC m=+145.897398232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.101924 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m8cvs" event={"ID":"181f90fa-40e7-4179-8866-6756a0cded18","Type":"ContainerStarted","Data":"2086666a390c118d19c489c59dcf471cccfa21b7532afa5edb2ae0283c9f7fe9"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.101956 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m8cvs" event={"ID":"181f90fa-40e7-4179-8866-6756a0cded18","Type":"ContainerStarted","Data":"ff78880942f01c00143fd5ef30061c3be3692bc4821f3108aea6bac4f8c3ada5"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.102008 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m8cvs" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.103378 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-24zjn" event={"ID":"5cbb9972-a73e-4826-9457-ae4f93b8d1c8","Type":"ContainerStarted","Data":"06c7f1cd340d501a7f56eb462783c74556dedefa56dba3599e3235f3a5767d57"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.104922 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk" event={"ID":"77311272-8b70-4772-8e4d-9a5f7d94f104","Type":"ContainerStarted","Data":"500e6fc86b5b02eebeb9356e6c93f528f10b9a56090d8025e89eede189cdfe1b"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.107098 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"5ff78c16547708a8e9e7608592ee98488027b26a259bf3267a5230de2dcde65a"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.109348 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerStarted","Data":"1b50988bb14c7407f499b4a56fd36a9378bccfbd57513778951be0ec7f22e0f4"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.109475 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" event={"ID":"4b956c8c-f12f-4622-b67d-29349ba463aa","Type":"ContainerStarted","Data":"55249ffde6422eeafbb4a0bce8213065861aa16362d7b7757e67fe7850d5c333"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111035 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" event={"ID":"0a6820b1-d17b-4bf8-961e-ff96d8e79b72","Type":"ContainerStarted","Data":"375101e380f61cd471897c555619b473535593a56b2bdbc73a3521f693f430b8"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111088 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" event={"ID":"0a6820b1-d17b-4bf8-961e-ff96d8e79b72","Type":"ContainerStarted","Data":"95f622a851dd1b4269f4320a8f660acab9ead77ee7958113bb993b73d9e627ca"} Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111848 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111885 4675 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.111998 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.112060 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.128512 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-44bjw" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.153551 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9zjhs" podStartSLOduration=10.153535634 podStartE2EDuration="10.153535634s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.151131453 +0000 UTC m=+145.447236676" watchObservedRunningTime="2026-01-24 06:55:44.153535634 +0000 UTC m=+145.449640857" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.196930 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.201548 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.201765 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.701733697 +0000 UTC m=+145.997838920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.202269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.206227 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.70621023 +0000 UTC m=+146.002315453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.304128 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.304327 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.80429849 +0000 UTC m=+146.100403713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.304391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.304776 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.804761831 +0000 UTC m=+146.100867064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.329538 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xwpc2" podStartSLOduration=126.32951925 podStartE2EDuration="2m6.32951925s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.319761536 +0000 UTC m=+145.615866769" watchObservedRunningTime="2026-01-24 06:55:44.32951925 +0000 UTC m=+145.625624473" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.330958 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" podStartSLOduration=127.330948496 podStartE2EDuration="2m7.330948496s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.264030264 +0000 UTC m=+145.560135487" watchObservedRunningTime="2026-01-24 06:55:44.330948496 +0000 UTC m=+145.627053719" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.405412 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.405605 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.90558153 +0000 UTC m=+146.201686753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.405762 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.406073 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:44.906065362 +0000 UTC m=+146.202170585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.424503 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" podStartSLOduration=126.424488972 podStartE2EDuration="2m6.424488972s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.423068176 +0000 UTC m=+145.719173399" watchObservedRunningTime="2026-01-24 06:55:44.424488972 +0000 UTC m=+145.720594195" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.460130 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rmdpv" podStartSLOduration=126.460112152 podStartE2EDuration="2m6.460112152s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.457832245 +0000 UTC m=+145.753937468" watchObservedRunningTime="2026-01-24 06:55:44.460112152 +0000 UTC m=+145.756217375" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.506865 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.507168 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.007153357 +0000 UTC m=+146.303258580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.525356 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m8cvs" podStartSLOduration=10.525336571 podStartE2EDuration="10.525336571s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.520940611 +0000 UTC m=+145.817045834" watchObservedRunningTime="2026-01-24 06:55:44.525336571 +0000 UTC m=+145.821441794" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.605180 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" podStartSLOduration=127.605162575 podStartE2EDuration="2m7.605162575s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.602740445 +0000 UTC m=+145.898845678" watchObservedRunningTime="2026-01-24 
06:55:44.605162575 +0000 UTC m=+145.901267798" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.608646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.608983 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.108970751 +0000 UTC m=+146.405075974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.709946 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.710470 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:45.210452176 +0000 UTC m=+146.506557399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.724226 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-577lm" podStartSLOduration=126.72421128 podStartE2EDuration="2m6.72421128s" podCreationTimestamp="2026-01-24 06:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:44.635596996 +0000 UTC m=+145.931702219" watchObservedRunningTime="2026-01-24 06:55:44.72421128 +0000 UTC m=+146.020316503" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.811787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.812186 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.312128326 +0000 UTC m=+146.608233549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.908038 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:44 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:44 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:44 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.908340 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.912900 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.913086 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:45.413060247 +0000 UTC m=+146.709165470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:44 crc kubenswrapper[4675]: I0124 06:55:44.913444 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:44 crc kubenswrapper[4675]: E0124 06:55:44.913783 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.413768915 +0000 UTC m=+146.709874138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.014758 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.015055 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.515038645 +0000 UTC m=+146.811143868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.087816 4675 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v7d6k container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.087883 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" podUID="cb716cde-084c-490b-a28f-f35c40c0adbb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.116179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.116518 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-24 06:55:45.61650204 +0000 UTC m=+146.912607263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.118777 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"b377b96e7448287f254edc49511104be79d19d3744866afe64ddeb44ab0a89c4"} Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.119827 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.119881 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.120153 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 
06:55:45.120172 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.217581 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.217779 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.717750819 +0000 UTC m=+147.013856042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.218200 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.219151 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.719143143 +0000 UTC m=+147.015248366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.319373 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.319488 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.81947207 +0000 UTC m=+147.115577293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.319782 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.320212 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.820192038 +0000 UTC m=+147.116297261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.338895 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7d6k" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.421343 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.421460 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.921437346 +0000 UTC m=+147.217542569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.421515 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.421861 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:45.921851878 +0000 UTC m=+147.217957101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.490444 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.491362 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.497929 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.522833 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.523024 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.022991344 +0000 UTC m=+147.319096567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523344 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523444 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.523568 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.523682 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.023670531 +0000 UTC m=+147.319775754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.625402 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.125370321 +0000 UTC m=+147.421475544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625740 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625903 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.625921 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"certified-operators-l7z59\" (UID: 
\"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.626589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.626654 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.626930 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.12691294 +0000 UTC m=+147.423018253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.653410 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.690236 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"certified-operators-l7z59\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.700459 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.727802 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.728384 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.228358975 +0000 UTC m=+147.524464208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.737777 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.738907 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.742629 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.805253 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838031 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838075 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838117 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.838139 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.838415 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.338403933 +0000 UTC m=+147.634509156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.903784 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.904958 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.908188 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:45 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:45 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:45 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.908238 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.926287 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:55:45 crc 
kubenswrapper[4675]: I0124 06:55:45.940291 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.940817 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.440796201 +0000 UTC m=+147.736901434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.940818 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941101 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " 
pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941298 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941512 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941624 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941750 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.941877 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.942010 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.942108 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:45 crc kubenswrapper[4675]: E0124 06:55:45.943073 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.443055987 +0000 UTC m=+147.739161210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.943590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.944124 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.950797 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 
06:55:45.952924 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.953802 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.962235 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.991523 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"community-operators-gmxj8\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:45 crc kubenswrapper[4675]: I0124 06:55:45.991747 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.047526 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.048102 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.048166 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.048193 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.048591 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:46.548575743 +0000 UTC m=+147.844680966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.049012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.049217 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.057107 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.076493 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"certified-operators-mrxqr\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.084533 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.086818 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.119621 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.127874 4675 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-b7h9m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.127917 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" podUID="8ae6fb5c-09c6-4c88-b4c4-37cfdf235d47" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155802 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155839 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155861 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.155889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.156163 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.656152071 +0000 UTC m=+147.952257294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.172958 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.182982 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.213057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"35f8bbea3430ba0b99f480831ab389faf48f9bab4202333bac1e88588d1cd56f"} Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.228263 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257311 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257531 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.257565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.258272 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:46.758256341 +0000 UTC m=+148.054361564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.258989 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.259323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.306508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"community-operators-gtt58\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.358333 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.358633 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:46.858621319 +0000 UTC m=+148.154726542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.433018 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.459419 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.459784 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-24 06:55:46.959766605 +0000 UTC m=+148.255871828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.509146 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.510141 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.548160 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.561133 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.562154 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.062142843 +0000 UTC m=+148.358248066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.566780 4675 patch_prober.go:28] interesting pod/apiserver-76f77b778f-s7phr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]log ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]etcd ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/generic-apiserver-start-informers ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/max-in-flight-filter ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 24 06:55:46 crc kubenswrapper[4675]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/project.openshift.io-projectcache ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/openshift.io-startinformers ok Jan 24 06:55:46 crc kubenswrapper[4675]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 24 06:55:46 crc 
kubenswrapper[4675]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 24 06:55:46 crc kubenswrapper[4675]: livez check failed Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.566816 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-s7phr" podUID="4b956c8c-f12f-4622-b67d-29349ba463aa" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.662332 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.662652 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.162631733 +0000 UTC m=+148.458736956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.731145 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-b7h9m" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.767504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.768828 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.268811496 +0000 UTC m=+148.564916719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.860022 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.862951 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.869710 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.870108 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.370080016 +0000 UTC m=+148.666185239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.881345 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.888658 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.909313 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xwk6j" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.923454 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.923581 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959204 4675 patch_prober.go:28] interesting pod/console-f9d7485db-c64jl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959259 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c64jl" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959535 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:46 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:46 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.959584 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.967804 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.977700 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.977823 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.977943 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:46 crc kubenswrapper[4675]: E0124 06:55:46.980169 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.480158676 +0000 UTC m=+148.776263899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:46 crc kubenswrapper[4675]: I0124 06:55:46.980760 4675 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080290 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080775 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080812 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.080886 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: E0124 06:55:47.080959 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.580943884 +0000 UTC m=+148.877049107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.095001 4675 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-24T06:55:46.980786201Z","Handler":null,"Name":""} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.126098 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.176783 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.184810 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:47 crc kubenswrapper[4675]: E0124 06:55:47.185114 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-24 06:55:47.685103665 +0000 UTC m=+148.981208888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qkls6" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.213073 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.227607 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerStarted","Data":"ee8ef93d6dbda9d79ddf1313f70a0d90a2db3cc78f034c04f57e14a671da3bf7"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.276077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" event={"ID":"c84c3367-bd13-4a3a-b8d0-c4a9157ee38f","Type":"ContainerStarted","Data":"b9fc134cc520444293aac16ca1f74eff20249b5ab35b473d68cd909c47f97ad9"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.296447 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:47 crc kubenswrapper[4675]: E0124 06:55:47.296815 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-24 06:55:47.796799396 +0000 UTC m=+149.092904619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.310013 4675 generic.go:334] "Generic (PLEG): container finished" podID="1165063b-e2f9-406a-86c7-0559c419d043" containerID="59eb245fda115973b3f277ca4c5731837caa16e3bd2b40daf6b31eeaebc1bf72" exitCode=0 Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.310072 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"59eb245fda115973b3f277ca4c5731837caa16e3bd2b40daf6b31eeaebc1bf72"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.310097 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerStarted","Data":"207126b350e6a988e2c0611799f1606a299f405d91cfae55c96cd51fac72006a"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.314918 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pn69w" podStartSLOduration=13.314907718 podStartE2EDuration="13.314907718s" podCreationTimestamp="2026-01-24 06:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:47.312255352 +0000 UTC m=+148.608360575" watchObservedRunningTime="2026-01-24 06:55:47.314907718 +0000 UTC m=+148.611012941" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.316593 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.322015 4675 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.322052 4675 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.331062 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"01de4ee9121d4c485f2b2face8eb6d72997e41317d1a52edd515d2097882bad2"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.331356 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"66922b35ef91651141df068d64d779e1a4692f12bb83833c9d415682d6b8d139"} Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.357546 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:55:47 crc kubenswrapper[4675]: W0124 06:55:47.367173 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69537bd3_d5fe_4baf_a1dc_16c366f2518b.slice/crio-2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739 WatchSource:0}: Error finding container 2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739: Status 404 returned error can't find the container with id 2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739 Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.401413 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.437286 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.504632 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.512793 4675 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.512842 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.515269 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.524054 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.537184 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.605084 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.605115 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " 
pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.605150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: W0124 06:55:47.632767 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a WatchSource:0}: Error finding container 0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a: Status 404 returned error can't find the container with id 0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.694128 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.699243 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700588 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700603 4675 patch_prober.go:28] interesting pod/downloads-7954f5f757-jntdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 
10.217.0.26:8080: connect: connection refused" start-of-body= Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700633 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.700657 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jntdn" podUID="4bfd659c-336a-4497-bb5b-eaf18b1118e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.706866 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.706909 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.706954 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 
crc kubenswrapper[4675]: I0124 06:55:47.707308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.707510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.721889 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.734935 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"redhat-marketplace-f482d\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.764641 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.904884 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.906296 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.906354 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:47 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:47 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:47 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.906378 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.907258 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:47 crc kubenswrapper[4675]: I0124 06:55:47.984108 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.010738 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.010791 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.010809 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.041836 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qkls6\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") " pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.111763 
4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112008 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112058 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.112934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.113140 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.129511 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"redhat-marketplace-hrjxs\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.153427 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.180258 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.196167 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.280774 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.329343 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.356936 4675 generic.go:334] "Generic (PLEG): container finished" podID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerID="27e8eea471043c3df40a37d217289a9d5547edf9d7cc2d893fdce5d2d206a098" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.356998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"27e8eea471043c3df40a37d217289a9d5547edf9d7cc2d893fdce5d2d206a098"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.369107 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0d064aea4a33cd49d14d603d4ee871844d37da35ad2cb0962ffbc1070bbceb2a"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.370686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7f615e0-e12d-48da-8f84-a37b15b77580","Type":"ContainerStarted","Data":"f51015446de6a0acf88d6eedc44df6fc45c130e0b6934676c9bca0f1932b28fa"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.372156 4675 generic.go:334] "Generic (PLEG): container finished" podID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerID="a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.372222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" 
event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerDied","Data":"a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.405620 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.405676 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.405702 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerStarted","Data":"bbfda04626fc4ed4b4d8b4cd5fb06a28fc11f612333f3084a7f09445e8606bac"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.429253 4675 generic.go:334] "Generic (PLEG): container finished" podID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" exitCode=0 Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.429331 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.429357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerStarted","Data":"2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 
06:55:48.461375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d4a25aee26f75d07e8f243dc1b721d40717c874d59c69811a98bae5a542f4913"} Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.473857 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjk5f" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.702405 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:55:48 crc kubenswrapper[4675]: W0124 06:55:48.724556 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8bbe037_b253_4db3_b0f5_d02a51ca300e.slice/crio-6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f WatchSource:0}: Error finding container 6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f: Status 404 returned error can't find the container with id 6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.762555 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"] Jan 24 06:55:48 crc kubenswrapper[4675]: W0124 06:55:48.769346 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef6caa30_be9c_438c_a494_8b54b5df218c.slice/crio-26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c WatchSource:0}: Error finding container 26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c: Status 404 returned error can't find the container with id 26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.903109 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.904390 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.906865 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.908609 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:48 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:48 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:48 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.908657 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.920181 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.939277 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.939356 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.939418 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:48 crc kubenswrapper[4675]: I0124 06:55:48.962175 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.040526 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.040604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.040662 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.041163 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.041202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.089278 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"redhat-operators-6vjtj\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.281181 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.282293 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.300607 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.315210 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.344701 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.344827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.344862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.394494 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.395200 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.402125 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.402653 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.406799 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.446267 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.446969 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447039 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447068 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447655 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.447884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.488937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bdc3723a15e52a9e1243b9b7561aa37625c0f986209cd1ae4107c1f1d1e2cc4b"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.494184 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j2j9\" (UniqueName: 
\"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"redhat-operators-ljvrz\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.494853 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a5a0d1daf3d4293e16f4f912242be249a82cc3d2aee3fe21fab075252eecb4c5"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.495619 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.497199 4675 generic.go:334] "Generic (PLEG): container finished" podID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" exitCode=0 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.497248 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.497269 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerStarted","Data":"6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.519440 4675 generic.go:334] "Generic (PLEG): container finished" podID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerID="156ee2cddf28e8263cf6105f2729141aefac521498a1d0cf537c8cb286858f52" exitCode=0 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.519516 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7f615e0-e12d-48da-8f84-a37b15b77580","Type":"ContainerDied","Data":"156ee2cddf28e8263cf6105f2729141aefac521498a1d0cf537c8cb286858f52"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.521988 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerStarted","Data":"b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.522024 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerStarted","Data":"26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.522268 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.523288 4675 generic.go:334] "Generic (PLEG): container finished" podID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerID="eb71624ab1714e3b868179ef8715f76bbb985e9ee1eb32ef5ea46430a5377ae3" exitCode=0 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.524121 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"eb71624ab1714e3b868179ef8715f76bbb985e9ee1eb32ef5ea46430a5377ae3"} Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.524140 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerStarted","Data":"236fdbdb18d6c63d9e15929a4a294f390be7b67da4280698a6623d3338464d82"} Jan 24 
06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.548403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.548439 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.549233 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.591342 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.597459 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.618195 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" podStartSLOduration=132.618169066 podStartE2EDuration="2m12.618169066s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:55:49.568068204 +0000 UTC m=+150.864173447" watchObservedRunningTime="2026-01-24 06:55:49.618169066 +0000 UTC m=+150.914274309" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.677239 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:55:49 crc kubenswrapper[4675]: W0124 06:55:49.687035 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a336bf_741a_462c_bafd_9ff5e4838956.slice/crio-17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9 WatchSource:0}: Error finding container 17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9: Status 404 returned error can't find the container with id 17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9 Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.714407 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.920890 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 24 06:55:49 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld Jan 24 06:55:49 crc kubenswrapper[4675]: [+]process-running ok Jan 24 06:55:49 crc kubenswrapper[4675]: healthz check failed Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.920945 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.953540 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.956096 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") pod \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.956136 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") pod \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.956180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") pod \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\" (UID: \"0b4201e4-a1e0-4256-aa5a-67383ee87bee\") " Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.958324 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b4201e4-a1e0-4256-aa5a-67383ee87bee" (UID: "0b4201e4-a1e0-4256-aa5a-67383ee87bee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.992070 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2" (OuterVolumeSpecName: "kube-api-access-67wr2") pod "0b4201e4-a1e0-4256-aa5a-67383ee87bee" (UID: "0b4201e4-a1e0-4256-aa5a-67383ee87bee"). 
InnerVolumeSpecName "kube-api-access-67wr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:55:49 crc kubenswrapper[4675]: I0124 06:55:49.992268 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b4201e4-a1e0-4256-aa5a-67383ee87bee" (UID: "0b4201e4-a1e0-4256-aa5a-67383ee87bee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.068659 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4201e4-a1e0-4256-aa5a-67383ee87bee-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.068687 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67wr2\" (UniqueName: \"kubernetes.io/projected/0b4201e4-a1e0-4256-aa5a-67383ee87bee-kube-api-access-67wr2\") on node \"crc\" DevicePath \"\""
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.068697 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4201e4-a1e0-4256-aa5a-67383ee87bee-config-volume\") on node \"crc\" DevicePath \"\""
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.279658 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"]
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.298858 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 24 06:55:50 crc kubenswrapper[4675]: W0124 06:55:50.311416 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9cd470_4963_4979_b7f6_50a2969febf8.slice/crio-782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971 WatchSource:0}: Error finding container 782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971: Status 404 returned error can't find the container with id 782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971
Jan 24 06:55:50 crc kubenswrapper[4675]: W0124 06:55:50.312481 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0835f7bf_325d_42e6_bc79_9c65c68ba95e.slice/crio-8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c WatchSource:0}: Error finding container 8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c: Status 404 returned error can't find the container with id 8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.532201 4675 generic.go:334] "Generic (PLEG): container finished" podID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" exitCode=0
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.532322 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0"}
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.532543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerStarted","Data":"782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971"}
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.543323 4675 generic.go:334] "Generic (PLEG): container finished" podID="26a336bf-741a-462c-bafd-9ff5e4838956" containerID="950ce170714980389fc4fdc60fb6c50ac2d025bc7af1f23de6767552eb91501f" exitCode=0
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.543430 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"950ce170714980389fc4fdc60fb6c50ac2d025bc7af1f23de6767552eb91501f"}
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.543455 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerStarted","Data":"17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9"}
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.551033 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0835f7bf-325d-42e6-bc79-9c65c68ba95e","Type":"ContainerStarted","Data":"8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c"}
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.568642 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.569307 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59" event={"ID":"0b4201e4-a1e0-4256-aa5a-67383ee87bee","Type":"ContainerDied","Data":"50d0cb80aa27ce6cef25c689ef2dda8afc1fb093c0efbca3d65994205d5a3a48"}
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.569339 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d0cb80aa27ce6cef25c689ef2dda8afc1fb093c0efbca3d65994205d5a3a48"
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.769871 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.879497 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") pod \"c7f615e0-e12d-48da-8f84-a37b15b77580\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") "
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.879554 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") pod \"c7f615e0-e12d-48da-8f84-a37b15b77580\" (UID: \"c7f615e0-e12d-48da-8f84-a37b15b77580\") "
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.879902 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c7f615e0-e12d-48da-8f84-a37b15b77580" (UID: "c7f615e0-e12d-48da-8f84-a37b15b77580"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.896223 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c7f615e0-e12d-48da-8f84-a37b15b77580" (UID: "c7f615e0-e12d-48da-8f84-a37b15b77580"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.908518 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:50 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:50 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:50 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.908566 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.981703 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7f615e0-e12d-48da-8f84-a37b15b77580-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 24 06:55:50 crc kubenswrapper[4675]: I0124 06:55:50.981757 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7f615e0-e12d-48da-8f84-a37b15b77580-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.515801 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.523108 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-s7phr"
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.583384 4675 generic.go:334] "Generic (PLEG): container finished" podID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" containerID="3691e01518f49140b2c391402b8660f9fda43695227e8e0f2102580109fb95bd" exitCode=0
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.583756 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0835f7bf-325d-42e6-bc79-9c65c68ba95e","Type":"ContainerDied","Data":"3691e01518f49140b2c391402b8660f9fda43695227e8e0f2102580109fb95bd"}
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.596304 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.596377 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c7f615e0-e12d-48da-8f84-a37b15b77580","Type":"ContainerDied","Data":"f51015446de6a0acf88d6eedc44df6fc45c130e0b6934676c9bca0f1932b28fa"}
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.596418 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f51015446de6a0acf88d6eedc44df6fc45c130e0b6934676c9bca0f1932b28fa"
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.906297 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:51 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:51 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:51 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:51 crc kubenswrapper[4675]: I0124 06:55:51.906346 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.877670 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.905358 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:52 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:52 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:52 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.905425 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.942688 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") pod \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") "
Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.944345 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") pod \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\" (UID: \"0835f7bf-325d-42e6-bc79-9c65c68ba95e\") "
Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.944641 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0835f7bf-325d-42e6-bc79-9c65c68ba95e" (UID: "0835f7bf-325d-42e6-bc79-9c65c68ba95e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 06:55:52 crc kubenswrapper[4675]: I0124 06:55:52.962908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0835f7bf-325d-42e6-bc79-9c65c68ba95e" (UID: "0835f7bf-325d-42e6-bc79-9c65c68ba95e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.046058 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.046103 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0835f7bf-325d-42e6-bc79-9c65c68ba95e-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.157309 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m8cvs"
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.610412 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0835f7bf-325d-42e6-bc79-9c65c68ba95e","Type":"ContainerDied","Data":"8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c"}
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.610449 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b73efc6e7d667a46e0bf22c42d6682fa8615c1dbec75a0e4be976772c2d449c"
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.610514 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.905372 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:53 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:53 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:53 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:53 crc kubenswrapper[4675]: I0124 06:55:53.905424 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:54 crc kubenswrapper[4675]: I0124 06:55:54.905941 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:54 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:54 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:54 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:54 crc kubenswrapper[4675]: I0124 06:55:54.906336 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:55 crc kubenswrapper[4675]: I0124 06:55:55.905450 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:55 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:55 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:55 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:55 crc kubenswrapper[4675]: I0124 06:55:55.905510 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.905764 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:56 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:56 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:56 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.905829 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.915070 4675 patch_prober.go:28] interesting pod/console-f9d7485db-c64jl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 24 06:55:56 crc kubenswrapper[4675]: I0124 06:55:56.915107 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c64jl" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 24 06:55:57 crc kubenswrapper[4675]: I0124 06:55:57.707037 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jntdn"
Jan 24 06:55:57 crc kubenswrapper[4675]: I0124 06:55:57.904675 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:57 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:57 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:57 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:57 crc kubenswrapper[4675]: I0124 06:55:57.904748 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:58 crc kubenswrapper[4675]: I0124 06:55:58.908229 4675 patch_prober.go:28] interesting pod/router-default-5444994796-xwk6j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 24 06:55:58 crc kubenswrapper[4675]: [-]has-synced failed: reason withheld
Jan 24 06:55:58 crc kubenswrapper[4675]: [+]process-running ok
Jan 24 06:55:58 crc kubenswrapper[4675]: healthz check failed
Jan 24 06:55:58 crc kubenswrapper[4675]: I0124 06:55:58.908658 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwk6j" podUID="737c0ee8-629a-4935-8357-c321e1ff5a41" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 24 06:55:59 crc kubenswrapper[4675]: I0124 06:55:59.905946 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:55:59 crc kubenswrapper[4675]: I0124 06:55:59.909221 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xwk6j"
Jan 24 06:56:00 crc kubenswrapper[4675]: I0124 06:56:00.126870 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj"
Jan 24 06:56:00 crc kubenswrapper[4675]: I0124 06:56:00.146855 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b6e6bdc-02e8-45ac-b89d-caf409ba451e-metrics-certs\") pod \"network-metrics-daemon-8mdgj\" (UID: \"9b6e6bdc-02e8-45ac-b89d-caf409ba451e\") " pod="openshift-multus/network-metrics-daemon-8mdgj"
Jan 24 06:56:00 crc kubenswrapper[4675]: I0124 06:56:00.398088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8mdgj"
Jan 24 06:56:06 crc kubenswrapper[4675]: I0124 06:56:06.919234 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:56:06 crc kubenswrapper[4675]: I0124 06:56:06.923060 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c64jl"
Jan 24 06:56:08 crc kubenswrapper[4675]: I0124 06:56:08.207003 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 06:56:08 crc kubenswrapper[4675]: I0124 06:56:08.630238 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 06:56:08 crc kubenswrapper[4675]: I0124 06:56:08.630573 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 06:56:14 crc kubenswrapper[4675]: E0124 06:56:14.514567 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 24 06:56:14 crc kubenswrapper[4675]: E0124 06:56:14.515023 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm4jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-l7z59_openshift-marketplace(1165063b-e2f9-406a-86c7-0559c419d043): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 24 06:56:14 crc kubenswrapper[4675]: E0124 06:56:14.516188 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-l7z59" podUID="1165063b-e2f9-406a-86c7-0559c419d043"
Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.409025 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-l7z59" podUID="1165063b-e2f9-406a-86c7-0559c419d043"
Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.442574 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.442869 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6m6sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mrxqr_openshift-marketplace(69537bd3-d5fe-4baf-a1dc-16c366f2518b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 24 06:56:15 crc kubenswrapper[4675]: E0124 06:56:15.444290 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b"
Jan 24 06:56:18 crc kubenswrapper[4675]: I0124 06:56:18.095521 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kwpzk"
Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.778638 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b"
Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.795614 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.795764 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rz7td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hrjxs_openshift-marketplace(b8bbe037-b253-4db3-b0f5-d02a51ca300e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 24 06:56:19 crc kubenswrapper[4675]: E0124 06:56:19.797044 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hrjxs" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e"
Jan 24 06:56:21 crc kubenswrapper[4675]: E0124 06:56:21.723916 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hrjxs" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e"
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.182251 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8mdgj"]
Jan 24 06:56:22 crc kubenswrapper[4675]: W0124 06:56:22.186431 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b6e6bdc_02e8_45ac_b89d_caf409ba451e.slice/crio-0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a WatchSource:0}: Error finding container 0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a: Status 404 returned error can't find the container with id 0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.927238 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerStarted","Data":"1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd"}
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.931856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerStarted","Data":"b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745"}
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.935162 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" event={"ID":"9b6e6bdc-02e8-45ac-b89d-caf409ba451e","Type":"ContainerStarted","Data":"0f57f249ebb00e35ae2db683a8d4da8453ba91d502b4752938920acf053e448a"}
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.936737 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerStarted","Data":"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f"}
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.938271 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerStarted","Data":"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc"}
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.941267 4675 generic.go:334] "Generic (PLEG): container finished" podID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerID="acc5dc0c07c3a0b5401b6f9bc7ce29ec56cf35e994494b4063328ab3e6990f50" exitCode=0
Jan 24 06:56:22 crc kubenswrapper[4675]: I0124 06:56:22.941327 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"acc5dc0c07c3a0b5401b6f9bc7ce29ec56cf35e994494b4063328ab3e6990f50"}
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.954913 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" exitCode=0
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.954998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc"}
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.957272 4675 generic.go:334] "Generic (PLEG): container finished" podID="26a336bf-741a-462c-bafd-9ff5e4838956" containerID="1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd" exitCode=0
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.957914 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd"}
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.966052 4675 generic.go:334] "Generic (PLEG): container finished" podID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerID="b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745" exitCode=0
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.966140 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745"}
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.974308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" event={"ID":"9b6e6bdc-02e8-45ac-b89d-caf409ba451e","Type":"ContainerStarted","Data":"c7e67da9454b525d59c4295c5e3eabc9ae1ed649eea092b8f8c6e9549f1859aa"}
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.981928 4675 generic.go:334] "Generic (PLEG): container finished" podID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" exitCode=0
Jan 24 06:56:23 crc kubenswrapper[4675]: I0124 06:56:23.982064 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f"}
Jan 24 06:56:24 crc kubenswrapper[4675]: I0124 06:56:24.989835 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8mdgj" event={"ID":"9b6e6bdc-02e8-45ac-b89d-caf409ba451e","Type":"ContainerStarted","Data":"64a6bcc42b75551d2745f412f87cf946e31d602bd2f2c60f5bda5bc8334240a1"} Jan 24 06:56:26 crc kubenswrapper[4675]: I0124 06:56:26.452738 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.012416 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8mdgj" podStartSLOduration=170.012395512 podStartE2EDuration="2m50.012395512s" podCreationTimestamp="2026-01-24 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:56:27.011334927 +0000 UTC m=+188.307440150" watchObservedRunningTime="2026-01-24 06:56:27.012395512 +0000 UTC m=+188.308500735" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226661 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 06:56:27 crc kubenswrapper[4675]: E0124 06:56:27.226887 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226898 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: E0124 06:56:27.226906 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226911 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: E0124 06:56:27.226929 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerName="collect-profiles" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.226935 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerName="collect-profiles" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229402 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f615e0-e12d-48da-8f84-a37b15b77580" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229427 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" containerName="collect-profiles" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229441 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0835f7bf-325d-42e6-bc79-9c65c68ba95e" containerName="pruner" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.229838 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.235856 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.235897 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.239760 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.287017 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.287307 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.389367 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.389470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.389758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.412466 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.553809 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:27 crc kubenswrapper[4675]: I0124 06:56:27.934002 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 24 06:56:28 crc kubenswrapper[4675]: I0124 06:56:28.007515 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerStarted","Data":"c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c"} Jan 24 06:56:28 crc kubenswrapper[4675]: I0124 06:56:28.008893 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerStarted","Data":"c42d4e6d79e6a249b3f19fb30c780fa3f18850b819ae46c1168daf382a770be1"} Jan 24 06:56:28 crc kubenswrapper[4675]: I0124 06:56:28.023939 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f482d" podStartSLOduration=3.074326582 podStartE2EDuration="41.023922702s" podCreationTimestamp="2026-01-24 06:55:47 +0000 UTC" firstStartedPulling="2026-01-24 06:55:49.528490916 +0000 UTC m=+150.824596139" lastFinishedPulling="2026-01-24 06:56:27.478087036 +0000 UTC m=+188.774192259" observedRunningTime="2026-01-24 06:56:28.020741942 +0000 UTC m=+189.316847185" watchObservedRunningTime="2026-01-24 06:56:28.023922702 +0000 UTC m=+189.320027925" Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.035871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerStarted","Data":"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a"} Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.037229 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerStarted","Data":"486ff467a3dcbf3710daad231b77c4c48f4036fea51d9faa8b991fb420a9aa34"} Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.061473 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtt58" podStartSLOduration=3.575059909 podStartE2EDuration="44.061456071s" podCreationTimestamp="2026-01-24 06:55:46 +0000 UTC" firstStartedPulling="2026-01-24 06:55:48.407911962 +0000 UTC m=+149.704017185" lastFinishedPulling="2026-01-24 06:56:28.894308124 +0000 UTC m=+190.190413347" observedRunningTime="2026-01-24 06:56:30.05578418 +0000 UTC m=+191.351889403" watchObservedRunningTime="2026-01-24 06:56:30.061456071 +0000 UTC m=+191.357561294" Jan 24 06:56:30 crc kubenswrapper[4675]: I0124 06:56:30.075266 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.075242446 podStartE2EDuration="3.075242446s" podCreationTimestamp="2026-01-24 06:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:56:30.072650371 +0000 UTC m=+191.368755604" watchObservedRunningTime="2026-01-24 06:56:30.075242446 +0000 UTC m=+191.371347679" Jan 24 06:56:31 crc kubenswrapper[4675]: I0124 06:56:31.049547 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerStarted","Data":"ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f"} Jan 24 06:56:32 crc kubenswrapper[4675]: I0124 06:56:32.061444 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerID="486ff467a3dcbf3710daad231b77c4c48f4036fea51d9faa8b991fb420a9aa34" exitCode=0 Jan 24 
06:56:32 crc kubenswrapper[4675]: I0124 06:56:32.062801 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerDied","Data":"486ff467a3dcbf3710daad231b77c4c48f4036fea51d9faa8b991fb420a9aa34"} Jan 24 06:56:32 crc kubenswrapper[4675]: I0124 06:56:32.083899 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmxj8" podStartSLOduration=5.297039704 podStartE2EDuration="47.083882013s" podCreationTimestamp="2026-01-24 06:55:45 +0000 UTC" firstStartedPulling="2026-01-24 06:55:48.358666682 +0000 UTC m=+149.654771905" lastFinishedPulling="2026-01-24 06:56:30.145508991 +0000 UTC m=+191.441614214" observedRunningTime="2026-01-24 06:56:32.080326435 +0000 UTC m=+193.376431648" watchObservedRunningTime="2026-01-24 06:56:32.083882013 +0000 UTC m=+193.379987236" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.446279 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465409 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") pod \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465522 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" (UID: "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465564 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") pod \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\" (UID: \"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69\") " Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.465770 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.470925 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" (UID: "8e26c6b2-31e3-46e5-a9ad-e74ffac10e69"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.567495 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e26c6b2-31e3-46e5-a9ad-e74ffac10e69-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.631594 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 06:56:33 crc kubenswrapper[4675]: E0124 06:56:33.632625 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerName="pruner" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.632650 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerName="pruner" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.632793 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e26c6b2-31e3-46e5-a9ad-e74ffac10e69" containerName="pruner" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.633352 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.645266 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.668955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.669032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.669152 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769706 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.769855 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.796559 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:33 crc kubenswrapper[4675]: I0124 06:56:33.952876 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.071743 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e26c6b2-31e3-46e5-a9ad-e74ffac10e69","Type":"ContainerDied","Data":"c42d4e6d79e6a249b3f19fb30c780fa3f18850b819ae46c1168daf382a770be1"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.071791 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42d4e6d79e6a249b3f19fb30c780fa3f18850b819ae46c1168daf382a770be1" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.071872 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.082522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerStarted","Data":"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.096563 4675 generic.go:334] "Generic (PLEG): container finished" podID="1165063b-e2f9-406a-86c7-0559c419d043" containerID="42b8ee55bd339ab55f41df3ff58f52b52b0d7e8bb773f48fda829b8f6ab4ed80" exitCode=0 Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.096626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"42b8ee55bd339ab55f41df3ff58f52b52b0d7e8bb773f48fda829b8f6ab4ed80"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.138629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" 
event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerStarted","Data":"a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376"} Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.142290 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ljvrz" podStartSLOduration=2.244527604 podStartE2EDuration="45.142276345s" podCreationTimestamp="2026-01-24 06:55:49 +0000 UTC" firstStartedPulling="2026-01-24 06:55:50.535230645 +0000 UTC m=+151.831335868" lastFinishedPulling="2026-01-24 06:56:33.432979386 +0000 UTC m=+194.729084609" observedRunningTime="2026-01-24 06:56:34.10726307 +0000 UTC m=+195.403368303" watchObservedRunningTime="2026-01-24 06:56:34.142276345 +0000 UTC m=+195.438381558" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.168397 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6vjtj" podStartSLOduration=3.273827625 podStartE2EDuration="46.168293894s" podCreationTimestamp="2026-01-24 06:55:48 +0000 UTC" firstStartedPulling="2026-01-24 06:55:50.545124572 +0000 UTC m=+151.841229795" lastFinishedPulling="2026-01-24 06:56:33.439590841 +0000 UTC m=+194.735696064" observedRunningTime="2026-01-24 06:56:34.165412732 +0000 UTC m=+195.461517955" watchObservedRunningTime="2026-01-24 06:56:34.168293894 +0000 UTC m=+195.464399127" Jan 24 06:56:34 crc kubenswrapper[4675]: I0124 06:56:34.421276 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.145710 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerStarted","Data":"83da16f3181e756a8f5edd35c942b704ad650f66f9209eff0332ba37056e3ddb"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.146056 4675 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerStarted","Data":"81112f58abc96693bfdadaf20511335609d3187604257dd97b45e9ebeca9ec56"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.148990 4675 generic.go:334] "Generic (PLEG): container finished" podID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" exitCode=0 Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.149043 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.151849 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerStarted","Data":"f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b"} Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.171228 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.171210138 podStartE2EDuration="2.171210138s" podCreationTimestamp="2026-01-24 06:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:56:35.170969632 +0000 UTC m=+196.467074855" watchObservedRunningTime="2026-01-24 06:56:35.171210138 +0000 UTC m=+196.467315351" Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.233265 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l7z59" podStartSLOduration=2.937969624 podStartE2EDuration="50.233249679s" podCreationTimestamp="2026-01-24 06:55:45 +0000 UTC" 
firstStartedPulling="2026-01-24 06:55:47.316292353 +0000 UTC m=+148.612397576" lastFinishedPulling="2026-01-24 06:56:34.611572408 +0000 UTC m=+195.907677631" observedRunningTime="2026-01-24 06:56:35.229286499 +0000 UTC m=+196.525391732" watchObservedRunningTime="2026-01-24 06:56:35.233249679 +0000 UTC m=+196.529354902" Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.805439 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:35 crc kubenswrapper[4675]: I0124 06:56:35.805686 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.058986 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.059033 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.098937 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.159899 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerStarted","Data":"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe"} Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.227337 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.228843 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:36 crc 
kubenswrapper[4675]: I0124 06:56:36.228879 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.249604 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mrxqr" podStartSLOduration=3.952885058 podStartE2EDuration="51.249586648s" podCreationTimestamp="2026-01-24 06:55:45 +0000 UTC" firstStartedPulling="2026-01-24 06:55:48.476617789 +0000 UTC m=+149.772723012" lastFinishedPulling="2026-01-24 06:56:35.773319379 +0000 UTC m=+197.069424602" observedRunningTime="2026-01-24 06:56:36.201787104 +0000 UTC m=+197.497892327" watchObservedRunningTime="2026-01-24 06:56:36.249586648 +0000 UTC m=+197.545691871" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.434669 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.434711 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.471144 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:36 crc kubenswrapper[4675]: I0124 06:56:36.860316 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-l7z59" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:36 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:36 crc kubenswrapper[4675]: > Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.213318 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtt58" Jan 24 
06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.276016 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:37 crc kubenswrapper[4675]: > Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.905676 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.905760 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:37 crc kubenswrapper[4675]: I0124 06:56:37.947354 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.209316 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.518081 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630239 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630304 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630349 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.630902 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 06:56:38 crc kubenswrapper[4675]: I0124 06:56:38.631005 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82" gracePeriod=600 Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.177858 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82" exitCode=0 Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.177966 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82"} Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.316543 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.316582 4675 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.598374 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:39 crc kubenswrapper[4675]: I0124 06:56:39.598783 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.185200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e"} Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.185319 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtt58" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" containerID="cri-o://acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" gracePeriod=2 Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.362018 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vjtj" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:40 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:40 crc kubenswrapper[4675]: > Jan 24 06:56:40 crc kubenswrapper[4675]: I0124 06:56:40.639682 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ljvrz" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" probeResult="failure" output=< Jan 24 06:56:40 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 06:56:40 crc 
kubenswrapper[4675]: > Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.020324 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.086867 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") pod \"06c92a2c-0b68-4b8f-92b3-9688aef50674\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.086973 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") pod \"06c92a2c-0b68-4b8f-92b3-9688aef50674\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.087013 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") pod \"06c92a2c-0b68-4b8f-92b3-9688aef50674\" (UID: \"06c92a2c-0b68-4b8f-92b3-9688aef50674\") " Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.087848 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities" (OuterVolumeSpecName: "utilities") pod "06c92a2c-0b68-4b8f-92b3-9688aef50674" (UID: "06c92a2c-0b68-4b8f-92b3-9688aef50674"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.092939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj" (OuterVolumeSpecName: "kube-api-access-z6rbj") pod "06c92a2c-0b68-4b8f-92b3-9688aef50674" (UID: "06c92a2c-0b68-4b8f-92b3-9688aef50674"). InnerVolumeSpecName "kube-api-access-z6rbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.142840 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06c92a2c-0b68-4b8f-92b3-9688aef50674" (UID: "06c92a2c-0b68-4b8f-92b3-9688aef50674"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.188490 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.188528 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6rbj\" (UniqueName: \"kubernetes.io/projected/06c92a2c-0b68-4b8f-92b3-9688aef50674-kube-api-access-z6rbj\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.188543 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06c92a2c-0b68-4b8f-92b3-9688aef50674-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.196347 4675 generic.go:334] "Generic (PLEG): container finished" podID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" 
containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" exitCode=0 Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.196411 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf"} Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198539 4675 generic.go:334] "Generic (PLEG): container finished" podID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" exitCode=0 Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198614 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a"} Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198623 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtt58" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198641 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtt58" event={"ID":"06c92a2c-0b68-4b8f-92b3-9688aef50674","Type":"ContainerDied","Data":"bbfda04626fc4ed4b4d8b4cd5fb06a28fc11f612333f3084a7f09445e8606bac"} Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.198661 4675 scope.go:117] "RemoveContainer" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.223144 4675 scope.go:117] "RemoveContainer" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.236141 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.240869 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtt58"] Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.246107 4675 scope.go:117] "RemoveContainer" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.264658 4675 scope.go:117] "RemoveContainer" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" Jan 24 06:56:42 crc kubenswrapper[4675]: E0124 06:56:42.265065 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a\": container with ID starting with acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a not found: ID does not exist" containerID="acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.265112 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a"} err="failed to get container status \"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a\": rpc error: code = NotFound desc = could not find container \"acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a\": container with ID starting with acff3e0872f2cf9fa049ab5c851c65a18998e2fda439854f71551e089722558a not found: ID does not exist" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.265137 4675 scope.go:117] "RemoveContainer" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" Jan 24 06:56:42 crc kubenswrapper[4675]: E0124 06:56:42.269755 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc\": container with ID starting with 566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc not found: ID does not exist" containerID="566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.269790 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc"} err="failed to get container status \"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc\": rpc error: code = NotFound desc = could not find container \"566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc\": container with ID starting with 566ec93a91ca2be068615b55d0d4ca8a35580c2efe8a064a81d4a4f17e396bdc not found: ID does not exist" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.269811 4675 scope.go:117] "RemoveContainer" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" Jan 24 06:56:42 crc kubenswrapper[4675]: E0124 
06:56:42.270161 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1\": container with ID starting with c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1 not found: ID does not exist" containerID="c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.270239 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1"} err="failed to get container status \"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1\": rpc error: code = NotFound desc = could not find container \"c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1\": container with ID starting with c4a901556efcef5757c924e0f8e71c7b1cc9acc0f0688b5021d10eb364530ed1 not found: ID does not exist" Jan 24 06:56:42 crc kubenswrapper[4675]: I0124 06:56:42.948474 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" path="/var/lib/kubelet/pods/06c92a2c-0b68-4b8f-92b3-9688aef50674/volumes" Jan 24 06:56:43 crc kubenswrapper[4675]: I0124 06:56:43.204317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerStarted","Data":"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d"} Jan 24 06:56:43 crc kubenswrapper[4675]: I0124 06:56:43.225384 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrjxs" podStartSLOduration=3.070550489 podStartE2EDuration="56.225362652s" podCreationTimestamp="2026-01-24 06:55:47 +0000 UTC" firstStartedPulling="2026-01-24 06:55:49.499115242 +0000 UTC m=+150.795220465" 
lastFinishedPulling="2026-01-24 06:56:42.653927405 +0000 UTC m=+203.950032628" observedRunningTime="2026-01-24 06:56:43.222987773 +0000 UTC m=+204.519092996" watchObservedRunningTime="2026-01-24 06:56:43.225362652 +0000 UTC m=+204.521467875" Jan 24 06:56:45 crc kubenswrapper[4675]: I0124 06:56:45.856573 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:45 crc kubenswrapper[4675]: I0124 06:56:45.889816 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:56:46 crc kubenswrapper[4675]: I0124 06:56:46.278362 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:46 crc kubenswrapper[4675]: I0124 06:56:46.317444 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:46 crc kubenswrapper[4675]: I0124 06:56:46.718817 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.236649 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mrxqr" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" containerID="cri-o://28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" gracePeriod=2 Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.281068 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.281117 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.337113 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.803633 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.867147 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") pod \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.867402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") pod \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.868072 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities" (OuterVolumeSpecName: "utilities") pod "69537bd3-d5fe-4baf-a1dc-16c366f2518b" (UID: "69537bd3-d5fe-4baf-a1dc-16c366f2518b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.883614 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc" (OuterVolumeSpecName: "kube-api-access-6m6sc") pod "69537bd3-d5fe-4baf-a1dc-16c366f2518b" (UID: "69537bd3-d5fe-4baf-a1dc-16c366f2518b"). InnerVolumeSpecName "kube-api-access-6m6sc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.967790 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") pod \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\" (UID: \"69537bd3-d5fe-4baf-a1dc-16c366f2518b\") " Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.968090 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m6sc\" (UniqueName: \"kubernetes.io/projected/69537bd3-d5fe-4baf-a1dc-16c366f2518b-kube-api-access-6m6sc\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:48 crc kubenswrapper[4675]: I0124 06:56:48.968118 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.019563 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69537bd3-d5fe-4baf-a1dc-16c366f2518b" (UID: "69537bd3-d5fe-4baf-a1dc-16c366f2518b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.069149 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69537bd3-d5fe-4baf-a1dc-16c366f2518b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.247829 4675 generic.go:334] "Generic (PLEG): container finished" podID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" exitCode=0 Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248000 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrxqr" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe"} Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248191 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrxqr" event={"ID":"69537bd3-d5fe-4baf-a1dc-16c366f2518b","Type":"ContainerDied","Data":"2ae5b1917c1a3d53b8698eeecb43dd8b9b50c129480e9082905b7fc129076739"} Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.248269 4675 scope.go:117] "RemoveContainer" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.274007 4675 scope.go:117] "RemoveContainer" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.303429 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:56:49 crc kubenswrapper[4675]: 
I0124 06:56:49.309784 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mrxqr"] Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.314034 4675 scope.go:117] "RemoveContainer" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.314214 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.341833 4675 scope.go:117] "RemoveContainer" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" Jan 24 06:56:49 crc kubenswrapper[4675]: E0124 06:56:49.342310 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe\": container with ID starting with 28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe not found: ID does not exist" containerID="28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342353 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe"} err="failed to get container status \"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe\": rpc error: code = NotFound desc = could not find container \"28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe\": container with ID starting with 28fd0cae35d35cfec6231c2678b4017783ee56a42c26829d5c41e57fd2b123fe not found: ID does not exist" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342377 4675 scope.go:117] "RemoveContainer" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" Jan 24 06:56:49 crc kubenswrapper[4675]: E0124 06:56:49.342659 4675 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc\": container with ID starting with eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc not found: ID does not exist" containerID="eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342680 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc"} err="failed to get container status \"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc\": rpc error: code = NotFound desc = could not find container \"eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc\": container with ID starting with eaf7fc7954170de008a0897cff13a88c4210c6d8394ad3ddc713044f093e59dc not found: ID does not exist" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342691 4675 scope.go:117] "RemoveContainer" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" Jan 24 06:56:49 crc kubenswrapper[4675]: E0124 06:56:49.342913 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429\": container with ID starting with e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429 not found: ID does not exist" containerID="e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.342934 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429"} err="failed to get container status \"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429\": rpc error: code = NotFound desc = could 
not find container \"e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429\": container with ID starting with e448c568a7e9b671282e212dde37f5911db297b402216f3793f4813de4ba5429 not found: ID does not exist" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.372128 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.415380 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.635218 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:49 crc kubenswrapper[4675]: I0124 06:56:49.686609 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:50 crc kubenswrapper[4675]: I0124 06:56:50.949096 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" path="/var/lib/kubelet/pods/69537bd3-d5fe-4baf-a1dc-16c366f2518b/volumes" Jan 24 06:56:51 crc kubenswrapper[4675]: I0124 06:56:51.116393 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:56:51 crc kubenswrapper[4675]: I0124 06:56:51.261834 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrjxs" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" containerID="cri-o://be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" gracePeriod=2 Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.145410 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267834 4675 generic.go:334] "Generic (PLEG): container finished" podID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" exitCode=0 Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267884 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrjxs" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267891 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d"} Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267928 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrjxs" event={"ID":"b8bbe037-b253-4db3-b0f5-d02a51ca300e","Type":"ContainerDied","Data":"6ebfde1b827bc87808d9c9e67a4bac6a8b9d56cf9e02df2ef1e3cfa3213e6b8f"} Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.267944 4675 scope.go:117] "RemoveContainer" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.284025 4675 scope.go:117] "RemoveContainer" containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.303696 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") pod \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.303749 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") pod \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.303853 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") pod \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\" (UID: \"b8bbe037-b253-4db3-b0f5-d02a51ca300e\") " Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.304978 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities" (OuterVolumeSpecName: "utilities") pod "b8bbe037-b253-4db3-b0f5-d02a51ca300e" (UID: "b8bbe037-b253-4db3-b0f5-d02a51ca300e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.306824 4675 scope.go:117] "RemoveContainer" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.309641 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td" (OuterVolumeSpecName: "kube-api-access-rz7td") pod "b8bbe037-b253-4db3-b0f5-d02a51ca300e" (UID: "b8bbe037-b253-4db3-b0f5-d02a51ca300e"). InnerVolumeSpecName "kube-api-access-rz7td". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.325134 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz7td\" (UniqueName: \"kubernetes.io/projected/b8bbe037-b253-4db3-b0f5-d02a51ca300e-kube-api-access-rz7td\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.325163 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339063 4675 scope.go:117] "RemoveContainer" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" Jan 24 06:56:52 crc kubenswrapper[4675]: E0124 06:56:52.339478 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d\": container with ID starting with be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d not found: ID does not exist" containerID="be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339573 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d"} err="failed to get container status \"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d\": rpc error: code = NotFound desc = could not find container \"be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d\": container with ID starting with be7e72e03eb24cd10427c4f4dcccd71d3c6f88116361ce3229f795c190f0af4d not found: ID does not exist" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339660 4675 scope.go:117] "RemoveContainer" 
containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" Jan 24 06:56:52 crc kubenswrapper[4675]: E0124 06:56:52.339953 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf\": container with ID starting with aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf not found: ID does not exist" containerID="aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339972 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf"} err="failed to get container status \"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf\": rpc error: code = NotFound desc = could not find container \"aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf\": container with ID starting with aba66f99a84db6316f85686f0ae6b4f98b7ac559983be3b9c6a0a0fa4109dedf not found: ID does not exist" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.339985 4675 scope.go:117] "RemoveContainer" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" Jan 24 06:56:52 crc kubenswrapper[4675]: E0124 06:56:52.340265 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a\": container with ID starting with 0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a not found: ID does not exist" containerID="0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.340344 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a"} err="failed to get container status \"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a\": rpc error: code = NotFound desc = could not find container \"0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a\": container with ID starting with 0c24871f95d03e8477dd9e320e7a089249ca2e0bb75aeef1e31fa7bd3868631a not found: ID does not exist" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.367299 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8bbe037-b253-4db3-b0f5-d02a51ca300e" (UID: "b8bbe037-b253-4db3-b0f5-d02a51ca300e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.426660 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8bbe037-b253-4db3-b0f5-d02a51ca300e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.596762 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.597889 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrjxs"] Jan 24 06:56:52 crc kubenswrapper[4675]: I0124 06:56:52.951262 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" path="/var/lib/kubelet/pods/b8bbe037-b253-4db3-b0f5-d02a51ca300e/volumes" Jan 24 06:56:53 crc kubenswrapper[4675]: I0124 06:56:53.317881 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:56:53 crc kubenswrapper[4675]: I0124 
06:56:53.318241 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ljvrz" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" containerID="cri-o://58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" gracePeriod=2 Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.163974 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.249053 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") pod \"bb9cd470-4963-4979-b7f6-50a2969febf8\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.249126 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") pod \"bb9cd470-4963-4979-b7f6-50a2969febf8\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.249174 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") pod \"bb9cd470-4963-4979-b7f6-50a2969febf8\" (UID: \"bb9cd470-4963-4979-b7f6-50a2969febf8\") " Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.250118 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities" (OuterVolumeSpecName: "utilities") pod "bb9cd470-4963-4979-b7f6-50a2969febf8" (UID: "bb9cd470-4963-4979-b7f6-50a2969febf8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.258369 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9" (OuterVolumeSpecName: "kube-api-access-7j2j9") pod "bb9cd470-4963-4979-b7f6-50a2969febf8" (UID: "bb9cd470-4963-4979-b7f6-50a2969febf8"). InnerVolumeSpecName "kube-api-access-7j2j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286475 4675 generic.go:334] "Generic (PLEG): container finished" podID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" exitCode=0 Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc"} Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286539 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljvrz" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286552 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljvrz" event={"ID":"bb9cd470-4963-4979-b7f6-50a2969febf8","Type":"ContainerDied","Data":"782af23f4c0ebc3cdc7f1ab86c6d561da3e3393c3b0327b4b3a743335cbd6971"} Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.286571 4675 scope.go:117] "RemoveContainer" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.302150 4675 scope.go:117] "RemoveContainer" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.317446 4675 scope.go:117] "RemoveContainer" containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.335613 4675 scope.go:117] "RemoveContainer" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" Jan 24 06:56:54 crc kubenswrapper[4675]: E0124 06:56:54.335930 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc\": container with ID starting with 58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc not found: ID does not exist" containerID="58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.335953 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc"} err="failed to get container status \"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc\": rpc error: code = NotFound desc = could not find container 
\"58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc\": container with ID starting with 58131e6d4d1234bc031df0f0991e0402a6b0f987ca08e6c697d93f01ae2de8cc not found: ID does not exist" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.335973 4675 scope.go:117] "RemoveContainer" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" Jan 24 06:56:54 crc kubenswrapper[4675]: E0124 06:56:54.336138 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f\": container with ID starting with 117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f not found: ID does not exist" containerID="117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.336154 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f"} err="failed to get container status \"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f\": rpc error: code = NotFound desc = could not find container \"117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f\": container with ID starting with 117a2fbb6423b1030d48b793233ed82187f3e08c52c6c10d09eb2eb497b3cc0f not found: ID does not exist" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.336166 4675 scope.go:117] "RemoveContainer" containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" Jan 24 06:56:54 crc kubenswrapper[4675]: E0124 06:56:54.336315 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0\": container with ID starting with 28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0 not found: ID does not exist" 
containerID="28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.336329 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0"} err="failed to get container status \"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0\": rpc error: code = NotFound desc = could not find container \"28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0\": container with ID starting with 28a5aaf59927b69524c698be2bed75b24b6477685b874d96ad11153efaeaf5a0 not found: ID does not exist" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.350002 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j2j9\" (UniqueName: \"kubernetes.io/projected/bb9cd470-4963-4979-b7f6-50a2969febf8-kube-api-access-7j2j9\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.350024 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.369207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb9cd470-4963-4979-b7f6-50a2969febf8" (UID: "bb9cd470-4963-4979-b7f6-50a2969febf8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.451528 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9cd470-4963-4979-b7f6-50a2969febf8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.619414 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.625849 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ljvrz"] Jan 24 06:56:54 crc kubenswrapper[4675]: I0124 06:56:54.951135 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" path="/var/lib/kubelet/pods/bb9cd470-4963-4979-b7f6-50a2969febf8/volumes" Jan 24 06:56:56 crc kubenswrapper[4675]: I0124 06:56:56.173903 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.341550 4675 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342651 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342688 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342783 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342637 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.342757 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" gracePeriod=15 Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344301 4675 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344677 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344735 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344750 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344762 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344810 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344823 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344842 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344852 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344892 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344904 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344921 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344932 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344971 4675 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.344982 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.344998 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345009 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345021 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345057 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345070 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345081 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345095 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345144 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345155 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345166 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345177 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345192 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345202 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="extract-content" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345238 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345249 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345263 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345273 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345287 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345338 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345349 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="extract-utilities" Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.345362 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345373 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345609 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345629 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345663 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bbe037-b253-4db3-b0f5-d02a51ca300e" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345682 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 24 06:57:12 crc kubenswrapper[4675]: 
I0124 06:57:12.345700 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9cd470-4963-4979-b7f6-50a2969febf8" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345714 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="69537bd3-d5fe-4baf-a1dc-16c366f2518b" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345759 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345775 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.345789 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c92a2c-0b68-4b8f-92b3-9688aef50674" containerName="registry-server" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.346146 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.347934 4675 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.348650 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.354172 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.392624 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424204 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424228 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424260 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424278 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424308 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424322 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.424428 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.525836 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526282 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526325 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526039 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526373 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526491 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526574 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526639 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526645 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526693 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526752 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.526843 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: I0124 06:57:12.693885 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:12 crc kubenswrapper[4675]: W0124 06:57:12.733088 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7 WatchSource:0}: Error finding container 294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7: Status 404 returned error can't find the container with id 294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7 Jan 24 06:57:12 crc kubenswrapper[4675]: E0124 06:57:12.739991 4675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.68:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d986f5f7bfb49 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,LastTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.409819 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.411540 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412570 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412602 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412609 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412616 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" exitCode=2 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.412660 4675 scope.go:117] "RemoveContainer" containerID="e21cea7902edd4c99226e343e188a73f8ef020eadccb9548cf0b9ca551332d3b" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.416039 4675 generic.go:334] "Generic (PLEG): container finished" podID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" 
containerID="83da16f3181e756a8f5edd35c942b704ad650f66f9209eff0332ba37056e3ddb" exitCode=0 Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.416099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerDied","Data":"83da16f3181e756a8f5edd35c942b704ad650f66f9209eff0332ba37056e3ddb"} Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.417125 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.417628 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.418169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60"} Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.418200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"294225f5b89dbcf1f3dfebde2c7ac7b8846f3de38ae134235387475511f58ca7"} Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.418772 4675 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.419143 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.968468 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.969345 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.969694 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.970170 4675 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.970417 4675 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:13 crc kubenswrapper[4675]: I0124 06:57:13.970442 4675 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.970781 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="200ms" Jan 24 06:57:13 crc kubenswrapper[4675]: E0124 06:57:13.987078 4675 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.68:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" volumeName="registry-storage" Jan 24 06:57:14 crc kubenswrapper[4675]: E0124 06:57:14.172273 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="400ms" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.486671 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:14 crc kubenswrapper[4675]: E0124 06:57:14.572910 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="800ms" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.782441 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.783668 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.784082 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857671 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") pod \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857808 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") pod \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857862 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") pod \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\" (UID: \"9d4aab5c-f99b-43e8-84b3-6ced30ef8023\") " Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.857900 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock" (OuterVolumeSpecName: "var-lock") pod "9d4aab5c-f99b-43e8-84b3-6ced30ef8023" (UID: "9d4aab5c-f99b-43e8-84b3-6ced30ef8023"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.858060 4675 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.858789 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9d4aab5c-f99b-43e8-84b3-6ced30ef8023" (UID: "9d4aab5c-f99b-43e8-84b3-6ced30ef8023"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.865564 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9d4aab5c-f99b-43e8-84b3-6ced30ef8023" (UID: "9d4aab5c-f99b-43e8-84b3-6ced30ef8023"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.958813 4675 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:14 crc kubenswrapper[4675]: I0124 06:57:14.958849 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d4aab5c-f99b-43e8-84b3-6ced30ef8023-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.255327 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.256581 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.257184 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.257545 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.257931 4675 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364327 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364530 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364555 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364590 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.364651 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.365308 4675 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.365345 4675 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.365362 4675 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.374944 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="1.6s" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.503366 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.504129 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" exitCode=0 Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.504198 4675 scope.go:117] "RemoveContainer" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.504201 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.508922 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.509006 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d4aab5c-f99b-43e8-84b3-6ced30ef8023","Type":"ContainerDied","Data":"81112f58abc96693bfdadaf20511335609d3187604257dd97b45e9ebeca9ec56"} Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.509050 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81112f58abc96693bfdadaf20511335609d3187604257dd97b45e9ebeca9ec56" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.512751 4675 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.513064 4675 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.513253 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.521029 4675 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.522909 4675 scope.go:117] "RemoveContainer" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.523094 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.523362 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.129.56.68:6443: connect: connection refused" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.540460 4675 scope.go:117] "RemoveContainer" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.556446 4675 scope.go:117] "RemoveContainer" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.573701 4675 scope.go:117] "RemoveContainer" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.595891 4675 scope.go:117] "RemoveContainer" containerID="7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.614912 4675 scope.go:117] "RemoveContainer" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.615537 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\": container with ID starting with 3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c not found: ID does not exist" containerID="3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.615604 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c"} err="failed to get container status \"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\": rpc error: code = NotFound desc = could not find container \"3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c\": container with ID starting with 3d04a9712d02a92ba1079c70179b74248d8701eeb97d06a089bc82d26a265b6c not found: ID does not 
exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.615633 4675 scope.go:117] "RemoveContainer" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.616147 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\": container with ID starting with f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f not found: ID does not exist" containerID="f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616185 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f"} err="failed to get container status \"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\": rpc error: code = NotFound desc = could not find container \"f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f\": container with ID starting with f95bcad50786e329a924cb224b79e730515369a421dd00f04f4f2c0d5127d12f not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616218 4675 scope.go:117] "RemoveContainer" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.616560 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\": container with ID starting with 2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc not found: ID does not exist" containerID="2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616596 4675 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc"} err="failed to get container status \"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\": rpc error: code = NotFound desc = could not find container \"2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc\": container with ID starting with 2562f8cd218d4442c87f21bf6224872481c2b7fab68058dbce3537aa1a3064cc not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.616621 4675 scope.go:117] "RemoveContainer" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.617096 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\": container with ID starting with ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76 not found: ID does not exist" containerID="ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617124 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76"} err="failed to get container status \"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\": rpc error: code = NotFound desc = could not find container \"ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76\": container with ID starting with ba5ec94fa3383764a11b9ef43c6a3748cced9ffcd2c52a3b72c62ad94f972c76 not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617141 4675 scope.go:117] "RemoveContainer" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.617500 4675 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\": container with ID starting with 84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63 not found: ID does not exist" containerID="84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617537 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63"} err="failed to get container status \"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\": rpc error: code = NotFound desc = could not find container \"84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63\": container with ID starting with 84813f4aaa1bd8f66a62b0c79e429702d79e14b854a406302ca78ca37610cd63 not found: ID does not exist" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.617553 4675 scope.go:117] "RemoveContainer" containerID="7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993" Jan 24 06:57:15 crc kubenswrapper[4675]: E0124 06:57:15.618853 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\": container with ID starting with 7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993 not found: ID does not exist" containerID="7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993" Jan 24 06:57:15 crc kubenswrapper[4675]: I0124 06:57:15.618888 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993"} err="failed to get container status \"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\": rpc error: code = NotFound desc = could 
not find container \"7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993\": container with ID starting with 7f65dd631a8014ba0ebd9f9760b68206548545cc0f5b4487364b066c66d89993 not found: ID does not exist" Jan 24 06:57:16 crc kubenswrapper[4675]: I0124 06:57:16.949700 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 24 06:57:16 crc kubenswrapper[4675]: E0124 06:57:16.979077 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="3.2s" Jan 24 06:57:18 crc kubenswrapper[4675]: I0124 06:57:18.946738 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:18 crc kubenswrapper[4675]: I0124 06:57:18.947362 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:20 crc kubenswrapper[4675]: E0124 06:57:20.179877 4675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.68:6443: connect: connection refused" interval="6.4s" Jan 24 06:57:20 crc kubenswrapper[4675]: E0124 06:57:20.630274 4675 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.68:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d986f5f7bfb49 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,LastTimestamp:2026-01-24 06:57:12.737225545 +0000 UTC m=+234.033330808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.213119 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift" containerID="cri-o://983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9" gracePeriod=15 Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.554068 4675 generic.go:334] "Generic (PLEG): container finished" podID="00c16501-712c-4b60-a231-2a64e34ba677" containerID="983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9" exitCode=0 Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.554442 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" 
event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerDied","Data":"983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9"} Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.621044 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.621882 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.622384 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.622607 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.743968 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 
crc kubenswrapper[4675]: I0124 06:57:21.744016 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744052 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744103 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744121 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744157 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 
06:57:21.744185 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744207 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744222 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744236 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744255 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744271 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744295 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744320 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") pod \"00c16501-712c-4b60-a231-2a64e34ba677\" (UID: \"00c16501-712c-4b60-a231-2a64e34ba677\") " Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.744540 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.745645 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.745678 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.745768 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.750901 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751285 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751613 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751764 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.751966 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc" (OuterVolumeSpecName: "kube-api-access-z9pvc") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "kube-api-access-z9pvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.752880 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.754308 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.756935 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.757441 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.757661 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "00c16501-712c-4b60-a231-2a64e34ba677" (UID: "00c16501-712c-4b60-a231-2a64e34ba677"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846180 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846250 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846266 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846284 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846300 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846315 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846329 4675 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-z9pvc\" (UniqueName: \"kubernetes.io/projected/00c16501-712c-4b60-a231-2a64e34ba677-kube-api-access-z9pvc\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846343 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846358 4675 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846372 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846386 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846398 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 06:57:21.846412 4675 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00c16501-712c-4b60-a231-2a64e34ba677-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:21 crc kubenswrapper[4675]: I0124 
06:57:21.846423 4675 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00c16501-712c-4b60-a231-2a64e34ba677-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.560962 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" event={"ID":"00c16501-712c-4b60-a231-2a64e34ba677","Type":"ContainerDied","Data":"1baee155ca04c86836e94a8a309af90387ef167a0b3873a1f4bc0c4361aabb7d"} Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561026 4675 scope.go:117] "RemoveContainer" containerID="983c342a8cd6c22283e9b1583e5c4c4bb605f159419d665477ec69b008cd9cf9" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561034 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561729 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.561965 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.562196 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.576835 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.577341 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.577577 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.941976 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.943269 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.944100 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.946674 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.968852 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.968927 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:22 crc kubenswrapper[4675]: E0124 06:57:22.969612 4675 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:22 crc kubenswrapper[4675]: I0124 06:57:22.970288 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:23 crc kubenswrapper[4675]: W0124 06:57:23.000885 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713 WatchSource:0}: Error finding container 503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713: Status 404 returned error can't find the container with id 503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713 Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.570312 4675 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="007ada23f704ba5fbd4c2f68fa01a4bb3d3db9b89f7f0d81285e468c43163e13" exitCode=0 Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.570743 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"007ada23f704ba5fbd4c2f68fa01a4bb3d3db9b89f7f0d81285e468c43163e13"} Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.570947 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"503c1090e6110d3d7af8af19f312a4fd01bc634c48279f01b41c4b04811e0713"} Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.571252 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.571267 4675 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:23 crc kubenswrapper[4675]: E0124 06:57:23.572063 4675 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.572499 4675 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.573943 4675 status_manager.go:851] "Failed to get status for pod" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:23 crc kubenswrapper[4675]: I0124 06:57:23.574706 4675 status_manager.go:851] "Failed to get status for pod" podUID="00c16501-712c-4b60-a231-2a64e34ba677" pod="openshift-authentication/oauth-openshift-558db77b4-cnnh9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cnnh9\": dial tcp 38.129.56.68:6443: connect: connection refused" Jan 24 06:57:24 crc kubenswrapper[4675]: I0124 06:57:24.603242 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0af2a8ccaa93ccaa1c5fb434d0a748434a5f9f1c465d715413ff3b04c0987d59"} Jan 24 06:57:24 crc 
kubenswrapper[4675]: I0124 06:57:24.603293 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"68bc94f5de5bed6164773f05d00d486d1b89c2d89bd74bda82a43ee4512c8703"} Jan 24 06:57:24 crc kubenswrapper[4675]: I0124 06:57:24.603312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef178a185e52148e5c49caeab757ec5a1efa43677998a4493f6305c1fae100ba"} Jan 24 06:57:24 crc kubenswrapper[4675]: I0124 06:57:24.603326 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d1a6191fa655b8a433ded86c6d81e85a609a9cf0f552ebbb9578065043d24893"} Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.610821 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a461428d95a1131ff86c18a1dbbdcea583aa8b75e845c4d8415c5303c28b6a0e"} Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.611347 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.611359 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:25 crc kubenswrapper[4675]: I0124 06:57:25.611575 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.623536 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.623598 4675 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a" exitCode=1 Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.623626 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a"} Jan 24 06:57:26 crc kubenswrapper[4675]: I0124 06:57:26.624072 4675 scope.go:117] "RemoveContainer" containerID="f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.638823 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.638937 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297"} Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.970775 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.970858 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:27 crc kubenswrapper[4675]: I0124 06:57:27.979856 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.620230 4675 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.655580 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.655614 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.659882 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:30 crc kubenswrapper[4675]: I0124 06:57:30.726241 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="60a60daf-8592-40a0-bd7d-188a46a49628" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.443223 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.660358 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.660390 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:31 crc kubenswrapper[4675]: I0124 06:57:31.663556 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="60a60daf-8592-40a0-bd7d-188a46a49628" Jan 24 06:57:32 crc kubenswrapper[4675]: I0124 06:57:32.842023 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:57:32 crc kubenswrapper[4675]: I0124 06:57:32.842295 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 24 06:57:32 crc kubenswrapper[4675]: I0124 06:57:32.842362 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 24 06:57:36 crc kubenswrapper[4675]: I0124 06:57:36.868838 4675 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 24 06:57:37 crc kubenswrapper[4675]: I0124 06:57:37.075378 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 24 06:57:37 crc kubenswrapper[4675]: I0124 06:57:37.380774 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 24 06:57:37 crc kubenswrapper[4675]: I0124 06:57:37.401064 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.219520 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.264936 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.327220 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.344840 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.390782 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.409233 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.453496 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.519947 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.607520 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.642490 4675 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.643450 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=26.643433783 podStartE2EDuration="26.643433783s" podCreationTimestamp="2026-01-24 06:57:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:57:30.692585236 +0000 UTC m=+251.988690479" watchObservedRunningTime="2026-01-24 06:57:38.643433783 +0000 UTC m=+259.939539016" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648164 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cnnh9","openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648227 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648631 4675 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.648662 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="44ba90f6-bfee-4e2b-8f89-c43235412e6c" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.654149 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.666076 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.698193 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=8.69817049 podStartE2EDuration="8.69817049s" podCreationTimestamp="2026-01-24 06:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:57:38.6725309 +0000 UTC m=+259.968636153" 
watchObservedRunningTime="2026-01-24 06:57:38.69817049 +0000 UTC m=+259.994275753" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.886230 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.892413 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 24 06:57:38 crc kubenswrapper[4675]: I0124 06:57:38.949245 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c16501-712c-4b60-a231-2a64e34ba677" path="/var/lib/kubelet/pods/00c16501-712c-4b60-a231-2a64e34ba677/volumes" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.461250 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.700748 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.789433 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 24 06:57:39 crc kubenswrapper[4675]: I0124 06:57:39.816653 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.056229 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.146376 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.226613 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 24 06:57:40 
crc kubenswrapper[4675]: I0124 06:57:40.328237 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.341798 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.494660 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 06:57:40 crc kubenswrapper[4675]: I0124 06:57:40.574429 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.156815 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.209972 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.491151 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.494515 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.914537 4675 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.915063 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60" 
gracePeriod=5 Jan 24 06:57:41 crc kubenswrapper[4675]: I0124 06:57:41.981623 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.051668 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.176774 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.232028 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.262183 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.343656 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.434394 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.526570 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.560133 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.637057 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.717518 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.806598 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.842778 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.842868 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.873114 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 24 06:57:42 crc kubenswrapper[4675]: I0124 06:57:42.918293 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.055343 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.063962 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.078484 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 06:57:43 crc 
kubenswrapper[4675]: I0124 06:57:43.164507 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.256015 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.310505 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.399784 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 24 06:57:43 crc kubenswrapper[4675]: I0124 06:57:43.594656 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.126910 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.189964 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.260417 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.367790 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.421017 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.492656 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.667946 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 24 06:57:44 crc kubenswrapper[4675]: I0124 06:57:44.780174 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.093017 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.176497 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.211104 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.260151 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.356929 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.388108 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.411240 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 24 06:57:45 crc kubenswrapper[4675]: I0124 06:57:45.419418 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 24 06:57:45 crc kubenswrapper[4675]: 
I0124 06:57:45.728650 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.088082 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.114504 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.215675 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.302101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.380357 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.384617 4675 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.447442 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.691349 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.829440 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 24 06:57:46 crc kubenswrapper[4675]: I0124 06:57:46.993904 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 24 
06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.034249 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.134465 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.152090 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.204062 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.273554 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.323823 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.417584 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.456846 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.480018 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.480088 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.515160 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.616869 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.616929 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617007 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617045 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617077 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 
06:57:47.617138 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617205 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617205 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617292 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617445 4675 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617457 4675 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617468 4675 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.617476 4675 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.630575 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.651785 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.718456 4675 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.767964 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.768018 4675 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60" exitCode=137 Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.768066 4675 scope.go:117] "RemoveContainer" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.768150 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.787264 4675 scope.go:117] "RemoveContainer" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60" Jan 24 06:57:47 crc kubenswrapper[4675]: E0124 06:57:47.787735 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60\": container with ID starting with f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60 not found: ID does not exist" containerID="f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.787782 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60"} err="failed to get container status \"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60\": rpc error: code = NotFound desc = could not find container \"f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60\": container with ID starting with f4ae6696c855d872af73f58b6a5062a0492c391a848be6a6c41c62be8d83fc60 not found: ID does not exist" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.810358 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.818244 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.962212 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 24 06:57:47 crc kubenswrapper[4675]: I0124 06:57:47.971683 4675 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.152044 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.180523 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.261176 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.274835 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.298891 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.343598 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.440533 4675 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.492826 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.649122 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.705266 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.721381 4675 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.856559 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.920936 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.948038 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.950177 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.950566 4675 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.961359 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.961396 4675 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8c295437-e946-4237-8285-f1bbed0e47e1" Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.965666 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 24 06:57:48 crc kubenswrapper[4675]: I0124 06:57:48.965689 4675 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8c295437-e946-4237-8285-f1bbed0e47e1" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.087851 
4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.111140 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.142748 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.247356 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.331217 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.409658 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.461428 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.753690 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.796948 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 24 06:57:49 crc kubenswrapper[4675]: I0124 06:57:49.866582 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.056542 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.084147 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.105221 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.180206 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.227945 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.229831 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.261663 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.266371 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.432139 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.496312 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.689303 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 24 06:57:50 crc 
kubenswrapper[4675]: I0124 06:57:50.716384 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.887017 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.900381 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.953075 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 24 06:57:50 crc kubenswrapper[4675]: I0124 06:57:50.994394 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.081525 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.317759 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.393910 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.403659 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.560618 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.702826 4675 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.759279 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.786680 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.790513 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.822501 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.824544 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.905322 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.928692 4675 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 24 06:57:51 crc kubenswrapper[4675]: I0124 06:57:51.961520 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.011013 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.037796 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.061198 4675 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.146196 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.223806 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.347196 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.361404 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.402853 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.514006 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.588532 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.640922 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.694810 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.799565 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.842797 4675 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.842871 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.842959 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.844009 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.844269 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297" gracePeriod=30 Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.856522 4675 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.858273 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"] Jan 24 06:57:52 crc kubenswrapper[4675]: E0124 06:57:52.858677 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.858805 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 06:57:52 crc kubenswrapper[4675]: E0124 06:57:52.858906 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" containerName="installer" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.858997 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" containerName="installer" Jan 24 06:57:52 crc kubenswrapper[4675]: E0124 06:57:52.859078 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859148 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859340 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c16501-712c-4b60-a231-2a64e34ba677" containerName="oauth-openshift" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859446 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.859531 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d4aab5c-f99b-43e8-84b3-6ced30ef8023" containerName="installer" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.860052 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.863270 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.863528 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.863774 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.865442 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.869701 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871203 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871480 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871680 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.871930 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 24 06:57:52 crc 
kubenswrapper[4675]: I0124 06:57:52.874176 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.875505 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.875652 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.877026 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"] Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.879658 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.883908 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.891796 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974510 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974594 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974688 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974793 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-policies\") pod 
\"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974825 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rg42\" (UniqueName: \"kubernetes.io/projected/819880a6-f27d-4aab-9e8d-16326b87fcfc-kube-api-access-6rg42\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-session\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974911 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-login\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 
06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974962 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-dir\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.974999 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.975039 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.975069 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-error\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:52 crc kubenswrapper[4675]: I0124 06:57:52.997571 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 24 06:57:53 crc 
kubenswrapper[4675]: I0124 06:57:53.015967 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076688 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076787 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076816 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-error\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076860 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" Jan 24 06:57:53 
crc kubenswrapper[4675]: I0124 06:57:53.076906 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076959 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.076982 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077013 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077043 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-policies\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077064 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rg42\" (UniqueName: \"kubernetes.io/projected/819880a6-f27d-4aab-9e8d-16326b87fcfc-kube-api-access-6rg42\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-session\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077123 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-login\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077151 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.077174 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-dir\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.078516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-dir\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.079444 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-audit-policies\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.079970 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.080253 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.080705 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.085118 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.085884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-error\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.086986 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.088877 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.090078 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.090516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-user-template-login\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.091772 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.092354 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/819880a6-f27d-4aab-9e8d-16326b87fcfc-v4-0-config-system-session\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.102817 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rg42\" (UniqueName: \"kubernetes.io/projected/819880a6-f27d-4aab-9e8d-16326b87fcfc-kube-api-access-6rg42\") pod \"oauth-openshift-6f67d677dd-tb9pj\" (UID: \"819880a6-f27d-4aab-9e8d-16326b87fcfc\") " pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.103564 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.192233 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.208357 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.276703 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.322855 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.323140 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.370138 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.400180 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.415404 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.433104 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.480909 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.521816 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.525948 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.533838 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.596811 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"]
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.667013 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.799524 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.805918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" event={"ID":"819880a6-f27d-4aab-9e8d-16326b87fcfc","Type":"ContainerStarted","Data":"13465081a3a7b803b246ab1f6dd1b4186091f02257ac52b0ca329eaba9924e0a"}
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.858829 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.898441 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 24 06:57:53 crc kubenswrapper[4675]: I0124 06:57:53.958913 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.051917 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.185115 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.300811 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.314800 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.405935 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.494361 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.529467 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.573038 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.614979 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.716584 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.817312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" event={"ID":"819880a6-f27d-4aab-9e8d-16326b87fcfc","Type":"ContainerStarted","Data":"6da88ca578b8f1849119c62d75920fb05e26f58d7434c56ef53cff8e9457d522"}
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.817669 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.829465 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.831539 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.835584 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.841352 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.857459 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f67d677dd-tb9pj" podStartSLOduration=58.857439062 podStartE2EDuration="58.857439062s" podCreationTimestamp="2026-01-24 06:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:57:54.851977722 +0000 UTC m=+276.148083015" watchObservedRunningTime="2026-01-24 06:57:54.857439062 +0000 UTC m=+276.153544295"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.894351 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.913046 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 24 06:57:54 crc kubenswrapper[4675]: I0124 06:57:54.951012 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.038794 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.186435 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.386558 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.513780 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.569769 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.601208 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.711647 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.736568 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.766821 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.782865 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 24 06:57:55 crc kubenswrapper[4675]: I0124 06:57:55.987610 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.015753 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.040886 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.081595 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.142963 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.521889 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.534348 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.680903 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 24 06:57:56 crc kubenswrapper[4675]: I0124 06:57:56.981224 4675 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.019627 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.090539 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.270510 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.295891 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.338755 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.470580 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.560229 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.673313 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 24 06:57:57 crc kubenswrapper[4675]: I0124 06:57:57.880343 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.165810 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.468959 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.508688 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 24 06:57:58 crc kubenswrapper[4675]: I0124 06:57:58.524853 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.034337 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.106034 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.304305 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.394885 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 24 06:57:59 crc kubenswrapper[4675]: I0124 06:57:59.490300 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 24 06:58:00 crc kubenswrapper[4675]: I0124 06:58:00.222074 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 24 06:58:18 crc kubenswrapper[4675]: I0124 06:58:18.820414 4675 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.968352 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.974931 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.974979 4675 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297" exitCode=137
Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.975007 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f62a39af41c2cb5054295c1a689b581d284c46d4df0acc115f7c90ad8f408297"}
Jan 24 06:58:22 crc kubenswrapper[4675]: I0124 06:58:22.975046 4675 scope.go:117] "RemoveContainer" containerID="f5760d759a56a33b816ec8f825abdd998d80a0515c93133e9533ccbc38dd0c2a"
Jan 24 06:58:23 crc kubenswrapper[4675]: I0124 06:58:23.987639 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 24 06:58:23 crc kubenswrapper[4675]: I0124 06:58:23.989737 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f62c520aebd80fc3733322341b021355b2ab863b0291a2704d4c4ab9c661bf31"}
Jan 24 06:58:31 crc kubenswrapper[4675]: I0124 06:58:31.442373 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:58:32 crc kubenswrapper[4675]: I0124 06:58:32.842255 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:58:32 crc kubenswrapper[4675]: I0124 06:58:32.847023 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:58:33 crc kubenswrapper[4675]: I0124 06:58:33.038161 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.267334 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"]
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.269239 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" containerID="cri-o://7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" gracePeriod=30
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.272569 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"]
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.272921 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" containerID="cri-o://d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" gracePeriod=30
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.624311 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c"
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.666118 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796536 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796577 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796665 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796772 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796808 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796830 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796853 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") pod \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\" (UID: \"5cea3fd8-8eb5-46e1-9991-ec1096d357e5\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.796885 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") pod \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\" (UID: \"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d\") "
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798042 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config" (OuterVolumeSpecName: "config") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798095 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798183 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798253 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config" (OuterVolumeSpecName: "config") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.798357 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802079 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc" (OuterVolumeSpecName: "kube-api-access-xvzrc") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "kube-api-access-xvzrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802120 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" (UID: "668cf0e9-4fc7-442c-a8b4-3783d8fadb6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802185 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l" (OuterVolumeSpecName: "kube-api-access-xdk9l") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "kube-api-access-xdk9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.802701 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5cea3fd8-8eb5-46e1-9991-ec1096d357e5" (UID: "5cea3fd8-8eb5-46e1-9991-ec1096d357e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898697 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898802 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898830 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdk9l\" (UniqueName: \"kubernetes.io/projected/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-kube-api-access-xdk9l\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898860 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvzrc\" (UniqueName: \"kubernetes.io/projected/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-kube-api-access-xvzrc\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898886 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898909 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898962 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.898991 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cea3fd8-8eb5-46e1-9991-ec1096d357e5-client-ca\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:43 crc kubenswrapper[4675]: I0124 06:58:43.899016 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d-config\") on node \"crc\" DevicePath \"\""
Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085793 4675 generic.go:334] "Generic (PLEG): container finished" podID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" exitCode=0
Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085866 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerDied","Data":"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9"}
Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085895 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" event={"ID":"5cea3fd8-8eb5-46e1-9991-ec1096d357e5","Type":"ContainerDied","Data":"d68bea8b00c026526be03f959939477c57040be0a40f40783ac0e65d642a96db"}
Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.085913 4675 scope.go:117] "RemoveContainer" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9"
Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.086009 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094739 4675 generic.go:334] "Generic (PLEG): container finished" podID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" exitCode=0 Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094786 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerDied","Data":"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6"} Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094815 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" event={"ID":"668cf0e9-4fc7-442c-a8b4-3783d8fadb6d","Type":"ContainerDied","Data":"2617a8d5990bca5860fe83af255dca72d1f078c4ac17075407e8e2d08aa3e5d0"} Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.094867 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tgs5c" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.113755 4675 scope.go:117] "RemoveContainer" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" Jan 24 06:58:44 crc kubenswrapper[4675]: E0124 06:58:44.114088 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9\": container with ID starting with d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9 not found: ID does not exist" containerID="d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.114119 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9"} err="failed to get container status \"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9\": rpc error: code = NotFound desc = could not find container \"d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9\": container with ID starting with d7898ff1979a46f43b37a4f95a1746b089b329cb3df3a3d3d55367c7c60c32d9 not found: ID does not exist" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.114139 4675 scope.go:117] "RemoveContainer" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.124572 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.128641 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xcztf"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.133282 4675 scope.go:117] 
"RemoveContainer" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" Jan 24 06:58:44 crc kubenswrapper[4675]: E0124 06:58:44.133707 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6\": container with ID starting with 7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6 not found: ID does not exist" containerID="7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.133781 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6"} err="failed to get container status \"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6\": rpc error: code = NotFound desc = could not find container \"7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6\": container with ID starting with 7359e7fd285c14c9029b6e0eccfb23b608c443767f114c4ecd10fe74c8bb36d6 not found: ID does not exist" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.134855 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.138785 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tgs5c"] Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.950074 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" path="/var/lib/kubelet/pods/5cea3fd8-8eb5-46e1-9991-ec1096d357e5/volumes" Jan 24 06:58:44 crc kubenswrapper[4675]: I0124 06:58:44.951288 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" 
path="/var/lib/kubelet/pods/668cf0e9-4fc7-442c-a8b4-3783d8fadb6d/volumes" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290424 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:58:45 crc kubenswrapper[4675]: E0124 06:58:45.290734 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290747 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: E0124 06:58:45.290763 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290771 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290874 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cea3fd8-8eb5-46e1-9991-ec1096d357e5" containerName="route-controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.290895 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="668cf0e9-4fc7-442c-a8b4-3783d8fadb6d" containerName="controller-manager" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.291371 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.293810 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.293833 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.293873 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.294623 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.299247 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.299576 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.299920 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300171 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300262 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300317 4675 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.300637 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.301082 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.301108 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.302064 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.302298 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.311694 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.335280 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415081 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b80055b-992d-4252-9e05-54057a97a274-serving-cert\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415128 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-config\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkrn\" (UniqueName: \"kubernetes.io/projected/4b80055b-992d-4252-9e05-54057a97a274-kube-api-access-5lkrn\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415201 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415270 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-client-ca\") pod 
\"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415331 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415437 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.415459 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b80055b-992d-4252-9e05-54057a97a274-serving-cert\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517069 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-config\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517091 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkrn\" (UniqueName: \"kubernetes.io/projected/4b80055b-992d-4252-9e05-54057a97a274-kube-api-access-5lkrn\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517125 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517142 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-client-ca\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " 
pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517163 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517219 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.517239 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.518806 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.518868 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-client-ca\") pod 
\"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.518996 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b80055b-992d-4252-9e05-54057a97a274-config\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.519199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.519277 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.522213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b80055b-992d-4252-9e05-54057a97a274-serving-cert\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.522212 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.534544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkrn\" (UniqueName: \"kubernetes.io/projected/4b80055b-992d-4252-9e05-54057a97a274-kube-api-access-5lkrn\") pod \"route-controller-manager-759899fff7-zbcrf\" (UID: \"4b80055b-992d-4252-9e05-54057a97a274\") " pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.542884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"controller-manager-5d69464474-s4wsb\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.614579 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.625104 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.844462 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:58:45 crc kubenswrapper[4675]: I0124 06:58:45.892262 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf"] Jan 24 06:58:45 crc kubenswrapper[4675]: W0124 06:58:45.900128 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b80055b_992d_4252_9e05_54057a97a274.slice/crio-94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442 WatchSource:0}: Error finding container 94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442: Status 404 returned error can't find the container with id 94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442 Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.118742 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerStarted","Data":"26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.119081 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.119094 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerStarted","Data":"96e565553cb619087fdc5ef912bf40674e76c71df4883bea6af8efc4c2e49f0f"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.121398 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" event={"ID":"4b80055b-992d-4252-9e05-54057a97a274","Type":"ContainerStarted","Data":"71f59b8b0da8f254e0f3868e35321280d2af023900cdefe9338ec0c961ca6d40"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.121424 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" event={"ID":"4b80055b-992d-4252-9e05-54057a97a274","Type":"ContainerStarted","Data":"94747f22b14af3784fa52a4342749cd119cf9babc290c9557587a4f1b6cc3442"} Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.121639 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.124790 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.136630 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" podStartSLOduration=3.136612671 podStartE2EDuration="3.136612671s" podCreationTimestamp="2026-01-24 06:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:58:46.134661463 +0000 UTC m=+327.430766706" watchObservedRunningTime="2026-01-24 06:58:46.136612671 +0000 UTC m=+327.432717904" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.188644 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" podStartSLOduration=3.188491475 podStartE2EDuration="3.188491475s" podCreationTimestamp="2026-01-24 06:58:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:58:46.186023526 +0000 UTC m=+327.482128759" watchObservedRunningTime="2026-01-24 06:58:46.188491475 +0000 UTC m=+327.484596698" Jan 24 06:58:46 crc kubenswrapper[4675]: I0124 06:58:46.495579 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-759899fff7-zbcrf" Jan 24 06:59:00 crc kubenswrapper[4675]: I0124 06:59:00.308506 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:59:00 crc kubenswrapper[4675]: I0124 06:59:00.309301 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" containerID="cri-o://26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6" gracePeriod=30 Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.198627 4675 generic.go:334] "Generic (PLEG): container finished" podID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerID="26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6" exitCode=0 Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.198951 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerDied","Data":"26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6"} Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.352626 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.375989 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f869b48f5-5s42m"] Jan 24 06:59:01 crc kubenswrapper[4675]: E0124 06:59:01.376175 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.376186 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.376279 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" containerName="controller-manager" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.376635 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.424503 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f869b48f5-5s42m"] Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519462 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519531 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519572 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519603 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519645 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") pod \"9a2ff8b9-89c4-4f23-863e-45f020ace61d\" (UID: 
\"9a2ff8b9-89c4-4f23-863e-45f020ace61d\") " Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519826 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j96l6\" (UniqueName: \"kubernetes.io/projected/3d81b726-5276-4f17-aee8-ef3ec176c910-kube-api-access-j96l6\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-proxy-ca-bundles\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519893 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-config\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519909 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d81b726-5276-4f17-aee8-ef3ec176c910-serving-cert\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.519924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-client-ca\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.520488 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config" (OuterVolumeSpecName: "config") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.520802 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.521412 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.524743 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt" (OuterVolumeSpecName: "kube-api-access-sbqwt") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "kube-api-access-sbqwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.530534 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a2ff8b9-89c4-4f23-863e-45f020ace61d" (UID: "9a2ff8b9-89c4-4f23-863e-45f020ace61d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621212 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-client-ca\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621330 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j96l6\" (UniqueName: \"kubernetes.io/projected/3d81b726-5276-4f17-aee8-ef3ec176c910-kube-api-access-j96l6\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-proxy-ca-bundles\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621414 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-config\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621434 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d81b726-5276-4f17-aee8-ef3ec176c910-serving-cert\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621474 4675 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a2ff8b9-89c4-4f23-863e-45f020ace61d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621489 4675 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621502 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbqwt\" (UniqueName: \"kubernetes.io/projected/9a2ff8b9-89c4-4f23-863e-45f020ace61d-kube-api-access-sbqwt\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621516 4675 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.621528 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2ff8b9-89c4-4f23-863e-45f020ace61d-config\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:01 crc 
kubenswrapper[4675]: I0124 06:59:01.622907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-client-ca\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.623121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-config\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.624746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d81b726-5276-4f17-aee8-ef3ec176c910-proxy-ca-bundles\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.625555 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d81b726-5276-4f17-aee8-ef3ec176c910-serving-cert\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.636663 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j96l6\" (UniqueName: \"kubernetes.io/projected/3d81b726-5276-4f17-aee8-ef3ec176c910-kube-api-access-j96l6\") pod \"controller-manager-6f869b48f5-5s42m\" (UID: \"3d81b726-5276-4f17-aee8-ef3ec176c910\") " 
pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:01 crc kubenswrapper[4675]: I0124 06:59:01.735837 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.157116 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f869b48f5-5s42m"] Jan 24 06:59:02 crc kubenswrapper[4675]: W0124 06:59:02.162310 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d81b726_5276_4f17_aee8_ef3ec176c910.slice/crio-81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda WatchSource:0}: Error finding container 81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda: Status 404 returned error can't find the container with id 81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.204632 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" event={"ID":"3d81b726-5276-4f17-aee8-ef3ec176c910","Type":"ContainerStarted","Data":"81a9b0f2ebc561ebe726b6829df4d7d344006c3b6540b517765ede2d8fe18cda"} Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.205868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" event={"ID":"9a2ff8b9-89c4-4f23-863e-45f020ace61d","Type":"ContainerDied","Data":"96e565553cb619087fdc5ef912bf40674e76c71df4883bea6af8efc4c2e49f0f"} Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.205903 4675 scope.go:117] "RemoveContainer" containerID="26fcfb0c650c6cecc42e1a08207ec4eefa8fb176b005855ed7ecb45aa844e6f6" Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.206020 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d69464474-s4wsb" Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.240943 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.245016 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d69464474-s4wsb"] Jan 24 06:59:02 crc kubenswrapper[4675]: I0124 06:59:02.948182 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2ff8b9-89c4-4f23-863e-45f020ace61d" path="/var/lib/kubelet/pods/9a2ff8b9-89c4-4f23-863e-45f020ace61d/volumes" Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.212906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" event={"ID":"3d81b726-5276-4f17-aee8-ef3ec176c910","Type":"ContainerStarted","Data":"a81f66b2038bb77432f712cea1f7945a7f39d0accacf48f957a8329c1ac75db6"} Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.214213 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.219658 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" Jan 24 06:59:03 crc kubenswrapper[4675]: I0124 06:59:03.237201 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f869b48f5-5s42m" podStartSLOduration=3.237179841 podStartE2EDuration="3.237179841s" podCreationTimestamp="2026-01-24 06:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:59:03.230659714 +0000 UTC m=+344.526764937" 
watchObservedRunningTime="2026-01-24 06:59:03.237179841 +0000 UTC m=+344.533285064" Jan 24 06:59:08 crc kubenswrapper[4675]: I0124 06:59:08.629996 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 06:59:08 crc kubenswrapper[4675]: I0124 06:59:08.630434 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.976336 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.977233 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l7z59" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server" containerID="cri-o://f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b" gracePeriod=30 Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.988338 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:59:17 crc kubenswrapper[4675]: I0124 06:59:17.990118 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmxj8" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server" containerID="cri-o://ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.003026 4675 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.003516 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" containerID="cri-o://eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.016524 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.016897 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f482d" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server" containerID="cri-o://c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.027760 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.027986 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6vjtj" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" containerID="cri-o://a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" gracePeriod=30 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.032315 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cx7r"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.033096 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.050914 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cx7r"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.139106 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgv9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.139299 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.150032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8kp\" (UniqueName: \"kubernetes.io/projected/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-kube-api-access-hk8kp\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.150095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.150155 
4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.251044 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.251106 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk8kp\" (UniqueName: \"kubernetes.io/projected/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-kube-api-access-hk8kp\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.251140 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.252587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.257006 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.271398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk8kp\" (UniqueName: \"kubernetes.io/projected/83c80cb7-74c3-417a-8d8e-54cdcf640b5b-kube-api-access-hk8kp\") pod \"marketplace-operator-79b997595-9cx7r\" (UID: \"83c80cb7-74c3-417a-8d8e-54cdcf640b5b\") " pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.314390 4675 generic.go:334] "Generic (PLEG): container finished" podID="1165063b-e2f9-406a-86c7-0559c419d043" containerID="f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.314447 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.316261 4675 generic.go:334] "Generic (PLEG): container finished" podID="26a336bf-741a-462c-bafd-9ff5e4838956" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.316322 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.318103 4675 generic.go:334] "Generic (PLEG): container finished" podID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerID="ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.318150 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.319790 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerID="eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.319829 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerDied","Data":"eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.322070 4675 generic.go:334] "Generic (PLEG): container finished" podID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerID="c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c" exitCode=0 Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.322192 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c"} Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 
06:59:18.348212 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.730919 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9cx7r"] Jan 24 06:59:18 crc kubenswrapper[4675]: I0124 06:59:18.975371 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.083206 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") pod \"1165063b-e2f9-406a-86c7-0559c419d043\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.099964 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") pod \"1165063b-e2f9-406a-86c7-0559c419d043\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.100008 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") pod \"1165063b-e2f9-406a-86c7-0559c419d043\" (UID: \"1165063b-e2f9-406a-86c7-0559c419d043\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.106201 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities" (OuterVolumeSpecName: "utilities") pod "1165063b-e2f9-406a-86c7-0559c419d043" (UID: "1165063b-e2f9-406a-86c7-0559c419d043"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.114368 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp" (OuterVolumeSpecName: "kube-api-access-rm4jp") pod "1165063b-e2f9-406a-86c7-0559c419d043" (UID: "1165063b-e2f9-406a-86c7-0559c419d043"). InnerVolumeSpecName "kube-api-access-rm4jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.159087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1165063b-e2f9-406a-86c7-0559c419d043" (UID: "1165063b-e2f9-406a-86c7-0559c419d043"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.204574 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.204612 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165063b-e2f9-406a-86c7-0559c419d043-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.204628 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm4jp\" (UniqueName: \"kubernetes.io/projected/1165063b-e2f9-406a-86c7-0559c419d043-kube-api-access-rm4jp\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.265040 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.305690 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") pod \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.305756 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") pod \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.305878 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") pod \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\" (UID: \"0eebacf7-e6c0-4fad-a868-ed067f1b1acc\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.308922 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities" (OuterVolumeSpecName: "utilities") pod "0eebacf7-e6c0-4fad-a868-ed067f1b1acc" (UID: "0eebacf7-e6c0-4fad-a868-ed067f1b1acc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.309003 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9" (OuterVolumeSpecName: "kube-api-access-s4zq9") pod "0eebacf7-e6c0-4fad-a868-ed067f1b1acc" (UID: "0eebacf7-e6c0-4fad-a868-ed067f1b1acc"). InnerVolumeSpecName "kube-api-access-s4zq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.316864 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.317801 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.318025 4675 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" cmd=["grpc_health_probe","-addr=:50051"] Jan 24 06:59:19 crc kubenswrapper[4675]: E0124 06:59:19.318053 4675 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-6vjtj" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.328876 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.338804 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxj8" event={"ID":"58002d63-9bc7-4470-a2ae-9be6e2828136","Type":"ContainerDied","Data":"ee8ef93d6dbda9d79ddf1313f70a0d90a2db3cc78f034c04f57e14a671da3bf7"} Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.338883 4675 scope.go:117] "RemoveContainer" containerID="ad43223b1b4489aeea4bb97c6915fdb5cd57e53e431523d75be8478f75c6178f" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.347036 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eebacf7-e6c0-4fad-a868-ed067f1b1acc" (UID: "0eebacf7-e6c0-4fad-a868-ed067f1b1acc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.352099 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.360359 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f482d" event={"ID":"0eebacf7-e6c0-4fad-a868-ed067f1b1acc","Type":"ContainerDied","Data":"236fdbdb18d6c63d9e15929a4a294f390be7b67da4280698a6623d3338464d82"} Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.360512 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f482d" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.363113 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.367678 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjtj" event={"ID":"26a336bf-741a-462c-bafd-9ff5e4838956","Type":"ContainerDied","Data":"17da6fe8ca04e09fc66d1d33d6a6d431e601614c667f17a5807f9476665435d9"} Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.367826 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjtj" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.379997 4675 scope.go:117] "RemoveContainer" containerID="b18e9709b62b8f7ea17174ebecf1128ccaae80aa9075eae9177465b80767c745" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.386668 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l7z59" event={"ID":"1165063b-e2f9-406a-86c7-0559c419d043","Type":"ContainerDied","Data":"207126b350e6a988e2c0611799f1606a299f405d91cfae55c96cd51fac72006a"} Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.387038 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l7z59" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.390584 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.390790 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgv9v" event={"ID":"b1a4e6f5-492a-4b32-aa94-c8eca20b0067","Type":"ContainerDied","Data":"4183cc63d47ed05819d502c422e1c423e9c066190ca15b760cb785c93f9da8c8"} Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.392153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" event={"ID":"83c80cb7-74c3-417a-8d8e-54cdcf640b5b","Type":"ContainerStarted","Data":"7315a16df384d876eeaa622c2c21e57a267d18e8770a0de409178f9ba53b2f51"} Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.392183 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" event={"ID":"83c80cb7-74c3-417a-8d8e-54cdcf640b5b","Type":"ContainerStarted","Data":"37afc4e8844756788f5eacf1da01b54d629e0fb7323e672007240e8259f6fb25"} Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.393195 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.401225 4675 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9cx7r container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.401284 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" podUID="83c80cb7-74c3-417a-8d8e-54cdcf640b5b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 
10.217.0.61:8080: connect: connection refused" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407500 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") pod \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407555 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") pod \"26a336bf-741a-462c-bafd-9ff5e4838956\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407587 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") pod \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407617 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") pod \"58002d63-9bc7-4470-a2ae-9be6e2828136\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") pod \"26a336bf-741a-462c-bafd-9ff5e4838956\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407697 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") pod \"58002d63-9bc7-4470-a2ae-9be6e2828136\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407752 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") pod \"26a336bf-741a-462c-bafd-9ff5e4838956\" (UID: \"26a336bf-741a-462c-bafd-9ff5e4838956\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407798 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") pod \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\" (UID: \"b1a4e6f5-492a-4b32-aa94-c8eca20b0067\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.407851 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") pod \"58002d63-9bc7-4470-a2ae-9be6e2828136\" (UID: \"58002d63-9bc7-4470-a2ae-9be6e2828136\") " Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.408087 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zq9\" (UniqueName: \"kubernetes.io/projected/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-kube-api-access-s4zq9\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.408107 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.408120 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0eebacf7-e6c0-4fad-a868-ed067f1b1acc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.410982 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities" (OuterVolumeSpecName: "utilities") pod "26a336bf-741a-462c-bafd-9ff5e4838956" (UID: "26a336bf-741a-462c-bafd-9ff5e4838956"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.411082 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b1a4e6f5-492a-4b32-aa94-c8eca20b0067" (UID: "b1a4e6f5-492a-4b32-aa94-c8eca20b0067"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.412857 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities" (OuterVolumeSpecName: "utilities") pod "58002d63-9bc7-4470-a2ae-9be6e2828136" (UID: "58002d63-9bc7-4470-a2ae-9be6e2828136"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.413704 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2" (OuterVolumeSpecName: "kube-api-access-szrw2") pod "58002d63-9bc7-4470-a2ae-9be6e2828136" (UID: "58002d63-9bc7-4470-a2ae-9be6e2828136"). InnerVolumeSpecName "kube-api-access-szrw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.415968 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj" (OuterVolumeSpecName: "kube-api-access-wfwmj") pod "26a336bf-741a-462c-bafd-9ff5e4838956" (UID: "26a336bf-741a-462c-bafd-9ff5e4838956"). InnerVolumeSpecName "kube-api-access-wfwmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.417295 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b1a4e6f5-492a-4b32-aa94-c8eca20b0067" (UID: "b1a4e6f5-492a-4b32-aa94-c8eca20b0067"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.418907 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p" (OuterVolumeSpecName: "kube-api-access-qw26p") pod "b1a4e6f5-492a-4b32-aa94-c8eca20b0067" (UID: "b1a4e6f5-492a-4b32-aa94-c8eca20b0067"). InnerVolumeSpecName "kube-api-access-qw26p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.418318 4675 scope.go:117] "RemoveContainer" containerID="27e8eea471043c3df40a37d217289a9d5547edf9d7cc2d893fdce5d2d206a098" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.456013 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.463232 4675 scope.go:117] "RemoveContainer" containerID="c15fa43487111e1d485b4196d8624a7f782747a8f0a151642273c5900aacb11c" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.468824 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f482d"] Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.471144 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" podStartSLOduration=1.471128373 podStartE2EDuration="1.471128373s" podCreationTimestamp="2026-01-24 06:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 06:59:19.439255551 +0000 UTC m=+360.735360774" watchObservedRunningTime="2026-01-24 06:59:19.471128373 +0000 UTC m=+360.767233596" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.478117 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.483516 4675 scope.go:117] "RemoveContainer" containerID="acc5dc0c07c3a0b5401b6f9bc7ce29ec56cf35e994494b4063328ab3e6990f50" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.484262 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l7z59"] Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.492108 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58002d63-9bc7-4470-a2ae-9be6e2828136" (UID: "58002d63-9bc7-4470-a2ae-9be6e2828136"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.500712 4675 scope.go:117] "RemoveContainer" containerID="eb71624ab1714e3b868179ef8715f76bbb985e9ee1eb32ef5ea46430a5377ae3" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508893 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508917 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szrw2\" (UniqueName: \"kubernetes.io/projected/58002d63-9bc7-4470-a2ae-9be6e2828136-kube-api-access-szrw2\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508926 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw26p\" (UniqueName: \"kubernetes.io/projected/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-kube-api-access-qw26p\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508935 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508943 4675 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a4e6f5-492a-4b32-aa94-c8eca20b0067-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508951 4675 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508959 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfwmj\" (UniqueName: \"kubernetes.io/projected/26a336bf-741a-462c-bafd-9ff5e4838956-kube-api-access-wfwmj\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.508969 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58002d63-9bc7-4470-a2ae-9be6e2828136-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.519683 4675 scope.go:117] "RemoveContainer" containerID="a3974755152f441aac5b6437c42a8b80fbf865813e0ba3c5784a077210ab2376" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.534599 4675 scope.go:117] "RemoveContainer" containerID="1e6ff790a8ef2685983150316ed55a0d1390d5076678d43aeeb0f36eeb83ccdd" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.557027 4675 scope.go:117] "RemoveContainer" containerID="950ce170714980389fc4fdc60fb6c50ac2d025bc7af1f23de6767552eb91501f" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.573628 4675 scope.go:117] "RemoveContainer" containerID="f7da7a6ea16001ac283a93fad40c7a51c75e4a7c85df4e2f006edf1afdc05e6b" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.574607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26a336bf-741a-462c-bafd-9ff5e4838956" (UID: "26a336bf-741a-462c-bafd-9ff5e4838956"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.584754 4675 scope.go:117] "RemoveContainer" containerID="42b8ee55bd339ab55f41df3ff58f52b52b0d7e8bb773f48fda829b8f6ab4ed80" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.601553 4675 scope.go:117] "RemoveContainer" containerID="59eb245fda115973b3f277ca4c5731837caa16e3bd2b40daf6b31eeaebc1bf72" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.610005 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a336bf-741a-462c-bafd-9ff5e4838956-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.613888 4675 scope.go:117] "RemoveContainer" containerID="eab0bc055c4be21ea7dee6f7dc7e94d0bda87b2e1b4295b18b3ab5807bb0774b" Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.690916 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.694148 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6vjtj"] Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.714678 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:59:19 crc kubenswrapper[4675]: I0124 06:59:19.721817 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgv9v"] Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191052 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qrkr2"] Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191287 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 
06:59:20.191302 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191317 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191325 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191336 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191345 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191357 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191365 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191380 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191390 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191407 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 
06:59:20.191417 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191427 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191434 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191444 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191452 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191463 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191470 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191481 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191490 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-content" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191503 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 
06:59:20.191510 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191520 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191527 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: E0124 06:59:20.191542 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191552 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="extract-utilities" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191701 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191964 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a336bf-741a-462c-bafd-9ff5e4838956" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191985 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1165063b-e2f9-406a-86c7-0559c419d043" containerName="registry-server" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.191999 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" containerName="marketplace-operator" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.192098 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" containerName="registry-server" Jan 24 06:59:20 crc 
kubenswrapper[4675]: I0124 06:59:20.193124 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.198840 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.203733 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrkr2"] Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.220611 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-utilities\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.220959 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppcx\" (UniqueName: \"kubernetes.io/projected/b4b49920-8f11-4ffb-84f0-930d921f722d-kube-api-access-mppcx\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.221107 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-catalog-content\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.322697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mppcx\" (UniqueName: 
\"kubernetes.io/projected/b4b49920-8f11-4ffb-84f0-930d921f722d-kube-api-access-mppcx\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.323414 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-catalog-content\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.323925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-utilities\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.323881 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-catalog-content\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.324280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b49920-8f11-4ffb-84f0-930d921f722d-utilities\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.349311 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mppcx\" (UniqueName: 
\"kubernetes.io/projected/b4b49920-8f11-4ffb-84f0-930d921f722d-kube-api-access-mppcx\") pod \"redhat-marketplace-qrkr2\" (UID: \"b4b49920-8f11-4ffb-84f0-930d921f722d\") " pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.385597 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zdff"] Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.386510 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.389335 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.396060 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zdff"] Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.401888 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmxj8" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.414529 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9cx7r" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.425376 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-catalog-content\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.425429 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-utilities\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.425500 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgjc\" (UniqueName: \"kubernetes.io/projected/96e2d7dc-bba1-4021-a095-98a4feb924da-kube-api-access-8xgjc\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.450741 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.456792 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmxj8"] Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.517010 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526154 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-catalog-content\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526696 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-utilities\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526632 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-catalog-content\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.526767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xgjc\" (UniqueName: \"kubernetes.io/projected/96e2d7dc-bba1-4021-a095-98a4feb924da-kube-api-access-8xgjc\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.527294 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e2d7dc-bba1-4021-a095-98a4feb924da-utilities\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " 
pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.550238 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgjc\" (UniqueName: \"kubernetes.io/projected/96e2d7dc-bba1-4021-a095-98a4feb924da-kube-api-access-8xgjc\") pod \"redhat-operators-2zdff\" (UID: \"96e2d7dc-bba1-4021-a095-98a4feb924da\") " pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.704760 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zdff" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.917498 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrkr2"] Jan 24 06:59:20 crc kubenswrapper[4675]: W0124 06:59:20.920147 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4b49920_8f11_4ffb_84f0_930d921f722d.slice/crio-50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259 WatchSource:0}: Error finding container 50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259: Status 404 returned error can't find the container with id 50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259 Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.948398 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eebacf7-e6c0-4fad-a868-ed067f1b1acc" path="/var/lib/kubelet/pods/0eebacf7-e6c0-4fad-a868-ed067f1b1acc/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.949180 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1165063b-e2f9-406a-86c7-0559c419d043" path="/var/lib/kubelet/pods/1165063b-e2f9-406a-86c7-0559c419d043/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.949729 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="26a336bf-741a-462c-bafd-9ff5e4838956" path="/var/lib/kubelet/pods/26a336bf-741a-462c-bafd-9ff5e4838956/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.950779 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58002d63-9bc7-4470-a2ae-9be6e2828136" path="/var/lib/kubelet/pods/58002d63-9bc7-4470-a2ae-9be6e2828136/volumes" Jan 24 06:59:20 crc kubenswrapper[4675]: I0124 06:59:20.951336 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a4e6f5-492a-4b32-aa94-c8eca20b0067" path="/var/lib/kubelet/pods/b1a4e6f5-492a-4b32-aa94-c8eca20b0067/volumes" Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.094134 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zdff"] Jan 24 06:59:21 crc kubenswrapper[4675]: W0124 06:59:21.116024 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e2d7dc_bba1_4021_a095_98a4feb924da.slice/crio-4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c WatchSource:0}: Error finding container 4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c: Status 404 returned error can't find the container with id 4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.415829 4675 generic.go:334] "Generic (PLEG): container finished" podID="b4b49920-8f11-4ffb-84f0-930d921f722d" containerID="ccaddfa5705fe148ac5795014f603d990719078f6b694bc09be4c33394a83f93" exitCode=0 Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.415888 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerDied","Data":"ccaddfa5705fe148ac5795014f603d990719078f6b694bc09be4c33394a83f93"} Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.415910 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerStarted","Data":"50100a21e450486bacc56aebbf06c49d06db9fba02ca317289b3ffbbe0b37259"} Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.420225 4675 generic.go:334] "Generic (PLEG): container finished" podID="96e2d7dc-bba1-4021-a095-98a4feb924da" containerID="5b167e690acee760ac42a4a289df373cf19e52976144fa4be335c52e57eaa6ad" exitCode=0 Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.420741 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerDied","Data":"5b167e690acee760ac42a4a289df373cf19e52976144fa4be335c52e57eaa6ad"} Jan 24 06:59:21 crc kubenswrapper[4675]: I0124 06:59:21.420782 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerStarted","Data":"4010b2a17b83113f70b394d20780ab8b7bcd196b96a51d83e4992b0990a8fc4c"} Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.431938 4675 generic.go:334] "Generic (PLEG): container finished" podID="b4b49920-8f11-4ffb-84f0-930d921f722d" containerID="65b02291eb6da3105f095eba4028ddbb473b0c6f2e8e52de9ef85c9f4e5201b5" exitCode=0 Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.432116 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerDied","Data":"65b02291eb6da3105f095eba4028ddbb473b0c6f2e8e52de9ef85c9f4e5201b5"} Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.440684 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" 
event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerStarted","Data":"a985913cd2b7a8f1a29a5a0e2ede47d567f18907cdc04794b06e93451d2719e8"} Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.593540 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsdlx"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.600339 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.602847 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.613749 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsdlx"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.653249 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-utilities\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.653312 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkkj\" (UniqueName: \"kubernetes.io/projected/c74192ba-e384-473f-8b1f-5acf16fcf6cb-kube-api-access-njkkj\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.653352 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-catalog-content\") pod 
\"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755009 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-utilities\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755064 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkkj\" (UniqueName: \"kubernetes.io/projected/c74192ba-e384-473f-8b1f-5acf16fcf6cb-kube-api-access-njkkj\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755095 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-catalog-content\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-utilities\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.755999 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74192ba-e384-473f-8b1f-5acf16fcf6cb-catalog-content\") pod \"certified-operators-bsdlx\" (UID: 
\"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.775522 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njkkj\" (UniqueName: \"kubernetes.io/projected/c74192ba-e384-473f-8b1f-5acf16fcf6cb-kube-api-access-njkkj\") pod \"certified-operators-bsdlx\" (UID: \"c74192ba-e384-473f-8b1f-5acf16fcf6cb\") " pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.791620 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25b5x"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.796843 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.800203 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.806902 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25b5x"] Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.856860 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-catalog-content\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.857241 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-utilities\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " 
pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.857267 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t57\" (UniqueName: \"kubernetes.io/projected/c82ba4e7-d34e-49ce-a0fa-628261617832-kube-api-access-85t57\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.925427 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsdlx" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.958909 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-catalog-content\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-utilities\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959550 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85t57\" (UniqueName: \"kubernetes.io/projected/c82ba4e7-d34e-49ce-a0fa-628261617832-kube-api-access-85t57\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959482 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-catalog-content\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.959918 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ba4e7-d34e-49ce-a0fa-628261617832-utilities\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:22 crc kubenswrapper[4675]: I0124 06:59:22.977814 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85t57\" (UniqueName: \"kubernetes.io/projected/c82ba4e7-d34e-49ce-a0fa-628261617832-kube-api-access-85t57\") pod \"community-operators-25b5x\" (UID: \"c82ba4e7-d34e-49ce-a0fa-628261617832\") " pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.129546 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25b5x" Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.378991 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsdlx"] Jan 24 06:59:23 crc kubenswrapper[4675]: W0124 06:59:23.387151 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74192ba_e384_473f_8b1f_5acf16fcf6cb.slice/crio-7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70 WatchSource:0}: Error finding container 7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70: Status 404 returned error can't find the container with id 7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70 Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.460828 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrkr2" event={"ID":"b4b49920-8f11-4ffb-84f0-930d921f722d","Type":"ContainerStarted","Data":"2263357c7498bfc162aea56406a3919aaa1fcfc6868f72fe6336bc2e318074e9"} Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.468409 4675 generic.go:334] "Generic (PLEG): container finished" podID="96e2d7dc-bba1-4021-a095-98a4feb924da" containerID="a985913cd2b7a8f1a29a5a0e2ede47d567f18907cdc04794b06e93451d2719e8" exitCode=0 Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.468696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerDied","Data":"a985913cd2b7a8f1a29a5a0e2ede47d567f18907cdc04794b06e93451d2719e8"} Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.473176 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" 
event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerStarted","Data":"7861865b9226f33984c76968633172a013ddd79ce08e8f2362590c6e5c3c9a70"} Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.491643 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qrkr2" podStartSLOduration=2.04499028 podStartE2EDuration="3.491625556s" podCreationTimestamp="2026-01-24 06:59:20 +0000 UTC" firstStartedPulling="2026-01-24 06:59:21.417510428 +0000 UTC m=+362.713615651" lastFinishedPulling="2026-01-24 06:59:22.864145704 +0000 UTC m=+364.160250927" observedRunningTime="2026-01-24 06:59:23.491248877 +0000 UTC m=+364.787354100" watchObservedRunningTime="2026-01-24 06:59:23.491625556 +0000 UTC m=+364.787730779" Jan 24 06:59:23 crc kubenswrapper[4675]: I0124 06:59:23.533469 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25b5x"] Jan 24 06:59:23 crc kubenswrapper[4675]: W0124 06:59:23.541219 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82ba4e7_d34e_49ce_a0fa_628261617832.slice/crio-ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b WatchSource:0}: Error finding container ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b: Status 404 returned error can't find the container with id ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.480261 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zdff" event={"ID":"96e2d7dc-bba1-4021-a095-98a4feb924da","Type":"ContainerStarted","Data":"a82eee3b291fcb3b1be2512271cfa6fc559bb6b1a9afb3bfba33c3cea8d6b40a"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.482842 4675 generic.go:334] "Generic (PLEG): container finished" podID="c82ba4e7-d34e-49ce-a0fa-628261617832" 
containerID="7af8a994af912e2c9f4b56adfaeac2a237c39b125b2dbf596993d850e495bc5a" exitCode=0 Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.482920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerDied","Data":"7af8a994af912e2c9f4b56adfaeac2a237c39b125b2dbf596993d850e495bc5a"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.482955 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerStarted","Data":"ca630d43e07b0a5f905d0f3d849801994dec3d41d00c4d0dbd4dc07f5a59b65b"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.484256 4675 generic.go:334] "Generic (PLEG): container finished" podID="c74192ba-e384-473f-8b1f-5acf16fcf6cb" containerID="a3db1b5fb74942c7678ba08dbe353b4ae8805d6a490dee7ba309e6668bcf1ae8" exitCode=0 Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.484352 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerDied","Data":"a3db1b5fb74942c7678ba08dbe353b4ae8805d6a490dee7ba309e6668bcf1ae8"} Jan 24 06:59:24 crc kubenswrapper[4675]: I0124 06:59:24.528616 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zdff" podStartSLOduration=1.869262425 podStartE2EDuration="4.528598923s" podCreationTimestamp="2026-01-24 06:59:20 +0000 UTC" firstStartedPulling="2026-01-24 06:59:21.421159576 +0000 UTC m=+362.717264789" lastFinishedPulling="2026-01-24 06:59:24.080496064 +0000 UTC m=+365.376601287" observedRunningTime="2026-01-24 06:59:24.506620631 +0000 UTC m=+365.802725854" watchObservedRunningTime="2026-01-24 06:59:24.528598923 +0000 UTC m=+365.824704146" Jan 24 06:59:25 crc kubenswrapper[4675]: I0124 
06:59:25.490706 4675 generic.go:334] "Generic (PLEG): container finished" podID="c82ba4e7-d34e-49ce-a0fa-628261617832" containerID="b04406b1c9b8c3f7c73fece492d531ae08bf545b760278cf923d81fafb1190bd" exitCode=0 Jan 24 06:59:25 crc kubenswrapper[4675]: I0124 06:59:25.490751 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerDied","Data":"b04406b1c9b8c3f7c73fece492d531ae08bf545b760278cf923d81fafb1190bd"} Jan 24 06:59:25 crc kubenswrapper[4675]: I0124 06:59:25.494491 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerStarted","Data":"49bf9b5a0c96a7a5d3c567d0fd076ea25dccae51e74d4e5be91306c8eab73923"} Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.501461 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25b5x" event={"ID":"c82ba4e7-d34e-49ce-a0fa-628261617832","Type":"ContainerStarted","Data":"ecd9551db7e81d40febc0ce99d00250b428edf061db986fa7f7141ef1f9d7ea9"} Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.503047 4675 generic.go:334] "Generic (PLEG): container finished" podID="c74192ba-e384-473f-8b1f-5acf16fcf6cb" containerID="49bf9b5a0c96a7a5d3c567d0fd076ea25dccae51e74d4e5be91306c8eab73923" exitCode=0 Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.503098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerDied","Data":"49bf9b5a0c96a7a5d3c567d0fd076ea25dccae51e74d4e5be91306c8eab73923"} Jan 24 06:59:26 crc kubenswrapper[4675]: I0124 06:59:26.525929 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25b5x" podStartSLOduration=3.110505402 
podStartE2EDuration="4.525908121s" podCreationTimestamp="2026-01-24 06:59:22 +0000 UTC" firstStartedPulling="2026-01-24 06:59:24.484450434 +0000 UTC m=+365.780555657" lastFinishedPulling="2026-01-24 06:59:25.899853153 +0000 UTC m=+367.195958376" observedRunningTime="2026-01-24 06:59:26.518996063 +0000 UTC m=+367.815101286" watchObservedRunningTime="2026-01-24 06:59:26.525908121 +0000 UTC m=+367.822013344" Jan 24 06:59:27 crc kubenswrapper[4675]: I0124 06:59:27.510367 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsdlx" event={"ID":"c74192ba-e384-473f-8b1f-5acf16fcf6cb","Type":"ContainerStarted","Data":"f41cbc17f47cd7d58a62f3200b3f735b68f52664142d072cf0a64ca1459891b4"} Jan 24 06:59:27 crc kubenswrapper[4675]: I0124 06:59:27.536555 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsdlx" podStartSLOduration=3.107682034 podStartE2EDuration="5.53653303s" podCreationTimestamp="2026-01-24 06:59:22 +0000 UTC" firstStartedPulling="2026-01-24 06:59:24.485486769 +0000 UTC m=+365.781591992" lastFinishedPulling="2026-01-24 06:59:26.914337755 +0000 UTC m=+368.210442988" observedRunningTime="2026-01-24 06:59:27.532608674 +0000 UTC m=+368.828713897" watchObservedRunningTime="2026-01-24 06:59:27.53653303 +0000 UTC m=+368.832638253" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.518274 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.519683 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.576198 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qrkr2" Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.619786 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qrkr2"
Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.705793 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.705839 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:30 crc kubenswrapper[4675]: I0124 06:59:30.751936 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:31 crc kubenswrapper[4675]: I0124 06:59:31.578087 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zdff"
Jan 24 06:59:32 crc kubenswrapper[4675]: I0124 06:59:32.926966 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsdlx"
Jan 24 06:59:32 crc kubenswrapper[4675]: I0124 06:59:32.927223 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsdlx"
Jan 24 06:59:32 crc kubenswrapper[4675]: I0124 06:59:32.963922 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsdlx"
Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.130401 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25b5x"
Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.130852 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25b5x"
Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.168831 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25b5x"
Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.577968 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25b5x"
Jan 24 06:59:33 crc kubenswrapper[4675]: I0124 06:59:33.578030 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsdlx"
Jan 24 06:59:38 crc kubenswrapper[4675]: I0124 06:59:38.629801 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 06:59:38 crc kubenswrapper[4675]: I0124 06:59:38.630303 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.191344 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"]
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.192589 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.195540 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.197554 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.209389 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"]
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.375947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.376035 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.376188 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.478249 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.478342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.478386 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.480352 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.488569 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.501887 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"collect-profiles-29487300-n24zh\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:00 crc kubenswrapper[4675]: I0124 07:00:00.511317 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.126640 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"]
Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.697391 4675 generic.go:334] "Generic (PLEG): container finished" podID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerID="2ee1de4c569b0dfae84a9127d5e07bf0bf62a91389eaf5b8b6361fce4ef2d02f" exitCode=0
Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.697433 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" event={"ID":"df2777ca-be51-4dc3-b7da-d84bd7ca16c4","Type":"ContainerDied","Data":"2ee1de4c569b0dfae84a9127d5e07bf0bf62a91389eaf5b8b6361fce4ef2d02f"}
Jan 24 07:00:01 crc kubenswrapper[4675]: I0124 07:00:01.697455 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" event={"ID":"df2777ca-be51-4dc3-b7da-d84bd7ca16c4","Type":"ContainerStarted","Data":"034e1e25b01d6f9441e5a8f075c8606c7081c4408951432813c333585de01e88"}
Jan 24 07:00:02 crc kubenswrapper[4675]: I0124 07:00:02.968155 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.012238 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") pod \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") "
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.012393 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") pod \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") "
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.012434 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") pod \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\" (UID: \"df2777ca-be51-4dc3-b7da-d84bd7ca16c4\") "
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.013125 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "df2777ca-be51-4dc3-b7da-d84bd7ca16c4" (UID: "df2777ca-be51-4dc3-b7da-d84bd7ca16c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.017076 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df2777ca-be51-4dc3-b7da-d84bd7ca16c4" (UID: "df2777ca-be51-4dc3-b7da-d84bd7ca16c4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.019883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv" (OuterVolumeSpecName: "kube-api-access-k5mcv") pod "df2777ca-be51-4dc3-b7da-d84bd7ca16c4" (UID: "df2777ca-be51-4dc3-b7da-d84bd7ca16c4"). InnerVolumeSpecName "kube-api-access-k5mcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.113703 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.113781 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5mcv\" (UniqueName: \"kubernetes.io/projected/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-kube-api-access-k5mcv\") on node \"crc\" DevicePath \"\""
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.113799 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2777ca-be51-4dc3-b7da-d84bd7ca16c4-config-volume\") on node \"crc\" DevicePath \"\""
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.711818 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh" event={"ID":"df2777ca-be51-4dc3-b7da-d84bd7ca16c4","Type":"ContainerDied","Data":"034e1e25b01d6f9441e5a8f075c8606c7081c4408951432813c333585de01e88"}
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.711871 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="034e1e25b01d6f9441e5a8f075c8606c7081c4408951432813c333585de01e88"
Jan 24 07:00:03 crc kubenswrapper[4675]: I0124 07:00:03.712306 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"
Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.630167 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.630533 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.630584 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c"
Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.631208 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 07:00:08 crc kubenswrapper[4675]: I0124 07:00:08.631263 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e" gracePeriod=600
Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.751930 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e" exitCode=0
Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.751958 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e"}
Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.752329 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58"}
Jan 24 07:00:09 crc kubenswrapper[4675]: I0124 07:00:09.752358 4675 scope.go:117] "RemoveContainer" containerID="c0b1d7698b7be768a8169bd645ffc2de860b3e7d2af92495ddb45abf74ec8d82"
Jan 24 07:02:08 crc kubenswrapper[4675]: I0124 07:02:08.630664 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:02:08 crc kubenswrapper[4675]: I0124 07:02:08.631385 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:02:38 crc kubenswrapper[4675]: I0124 07:02:38.629834 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:02:38 crc kubenswrapper[4675]: I0124 07:02:38.632372 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.542415 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5pvn"]
Jan 24 07:02:48 crc kubenswrapper[4675]: E0124 07:02:48.543513 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerName="collect-profiles"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.543532 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerName="collect-profiles"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.543668 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" containerName="collect-profiles"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.544191 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.573098 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5pvn"]
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672322 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrgk8\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-kube-api-access-nrgk8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672374 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-certificates\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-bound-sa-token\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a94693ba-91c6-4366-bc2f-c67b2dbea343-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672473 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-tls\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672505 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a94693ba-91c6-4366-bc2f-c67b2dbea343-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.672563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-trusted-ca\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.702050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773620 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a94693ba-91c6-4366-bc2f-c67b2dbea343-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773710 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-tls\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773757 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a94693ba-91c6-4366-bc2f-c67b2dbea343-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773786 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-trusted-ca\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773807 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrgk8\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-kube-api-access-nrgk8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773830 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-certificates\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.773848 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-bound-sa-token\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.774288 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a94693ba-91c6-4366-bc2f-c67b2dbea343-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.775227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-trusted-ca\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.775554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-certificates\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.781301 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a94693ba-91c6-4366-bc2f-c67b2dbea343-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.781957 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-registry-tls\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.789205 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-bound-sa-token\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.791091 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrgk8\" (UniqueName: \"kubernetes.io/projected/a94693ba-91c6-4366-bc2f-c67b2dbea343-kube-api-access-nrgk8\") pod \"image-registry-66df7c8f76-q5pvn\" (UID: \"a94693ba-91c6-4366-bc2f-c67b2dbea343\") " pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:48 crc kubenswrapper[4675]: I0124 07:02:48.861114 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:02:49 crc kubenswrapper[4675]: I0124 07:02:49.092934 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q5pvn"]
Jan 24 07:02:50 crc kubenswrapper[4675]: I0124 07:02:50.015284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" event={"ID":"a94693ba-91c6-4366-bc2f-c67b2dbea343","Type":"ContainerStarted","Data":"248faf75c96272f7c3d0962643b006b8e35d82fb38af40ec30bb6c1ec025fb47"}
Jan 24 07:02:50 crc kubenswrapper[4675]: I0124 07:02:50.015339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" event={"ID":"a94693ba-91c6-4366-bc2f-c67b2dbea343","Type":"ContainerStarted","Data":"ba08d3ea5acf0f09561cb718452ab86b6da2813e31fe520132670188e6e0c591"}
Jan 24 07:02:50 crc kubenswrapper[4675]: I0124 07:02:50.016184 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.630256 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.631202 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.631287 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c"
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.632320 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.632442 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58" gracePeriod=600
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.871092 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn"
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.908715 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-q5pvn" podStartSLOduration=20.908692328 podStartE2EDuration="20.908692328s" podCreationTimestamp="2026-01-24 07:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:02:50.053406287 +0000 UTC m=+571.349511510" watchObservedRunningTime="2026-01-24 07:03:08.908692328 +0000 UTC m=+590.204797561"
Jan 24 07:03:08 crc kubenswrapper[4675]: I0124 07:03:08.938945 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"]
Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127032 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58" exitCode=0
Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58"}
Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127434 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984"}
Jan 24 07:03:09 crc kubenswrapper[4675]: I0124 07:03:09.127461 4675 scope.go:117] "RemoveContainer" containerID="2440d3731286e05d73f10f26f20752681529fdf7e75d4631c9fef808933d662e"
Jan 24 07:03:33 crc kubenswrapper[4675]: I0124 07:03:33.998033 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry" containerID="cri-o://b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727" gracePeriod=30
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.290433 4675 generic.go:334] "Generic (PLEG): container finished" podID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerID="b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727" exitCode=0
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.290938 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerDied","Data":"b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727"}
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.363271 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6"
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509135 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509248 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509884 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509908 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509935 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.509981 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.510013 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.510119 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ef6caa30-be9c-438c-a494-8b54b5df218c\" (UID: \"ef6caa30-be9c-438c-a494-8b54b5df218c\") "
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.510815 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.512489 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.520459 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.520508 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.521148 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "bound-sa-token".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.530388 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c" (OuterVolumeSpecName: "kube-api-access-nl48c") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "kube-api-access-nl48c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.530400 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.533117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ef6caa30-be9c-438c-a494-8b54b5df218c" (UID: "ef6caa30-be9c-438c-a494-8b54b5df218c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612037 4675 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef6caa30-be9c-438c-a494-8b54b5df218c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612085 4675 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612103 4675 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612120 4675 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef6caa30-be9c-438c-a494-8b54b5df218c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612138 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef6caa30-be9c-438c-a494-8b54b5df218c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612154 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl48c\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-kube-api-access-nl48c\") on node \"crc\" DevicePath \"\"" Jan 24 07:03:34 crc kubenswrapper[4675]: I0124 07:03:34.612170 4675 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef6caa30-be9c-438c-a494-8b54b5df218c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 24 07:03:35 crc 
kubenswrapper[4675]: I0124 07:03:35.298286 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" event={"ID":"ef6caa30-be9c-438c-a494-8b54b5df218c","Type":"ContainerDied","Data":"26c4e5324526d12f61e99166a7be6e0bc691153acfad2f89632826b7fd39d68c"} Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.298614 4675 scope.go:117] "RemoveContainer" containerID="b38f62575b27bbaf36bcbcd3b1779bbb533c0972feed363dabac23c4bdb0e727" Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.298735 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qkls6" Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.319051 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"] Jan 24 07:03:35 crc kubenswrapper[4675]: I0124 07:03:35.325997 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qkls6"] Jan 24 07:03:36 crc kubenswrapper[4675]: I0124 07:03:36.948288 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" path="/var/lib/kubelet/pods/ef6caa30-be9c-438c-a494-8b54b5df218c/volumes" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.067394 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"] Jan 24 07:04:36 crc kubenswrapper[4675]: E0124 07:04:36.068305 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.068323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.068466 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ef6caa30-be9c-438c-a494-8b54b5df218c" containerName="registry" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.068938 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.071458 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 24 07:04:36 crc kubenswrapper[4675]: W0124 07:04:36.071538 4675 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Jan 24 07:04:36 crc kubenswrapper[4675]: E0124 07:04:36.071684 4675 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.079226 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dq6k8" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.082516 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gt7xw"] Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.083281 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gt7xw" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.086953 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lthpk"] Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.087503 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lp26r" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.087762 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.088053 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.088130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xmb6\" (UniqueName: \"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.090059 4675 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jjk4k" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.095047 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"] Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.099452 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["cert-manager/cert-manager-858654f9db-gt7xw"] Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.112849 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lthpk"] Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.189084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xmb6\" (UniqueName: \"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.189309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.290464 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:04:36 crc kubenswrapper[4675]: I0124 07:04:36.391609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.204660 4675 
projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205136 4675 projected.go:194] Error preparing data for projected volume kube-api-access-fzc65 for pod cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k: failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205246 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65 podName:99008be6-effb-4dc7-a761-ee291c03f093 nodeName:}" failed. No retries permitted until 2026-01-24 07:04:37.705209294 +0000 UTC m=+679.001314557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fzc65" (UniqueName: "kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65") pod "cert-manager-cainjector-cf98fcc89-6kp8k" (UID: "99008be6-effb-4dc7-a761-ee291c03f093") : failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205710 4675 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205791 4675 projected.go:194] Error preparing data for projected volume kube-api-access-6xmb6 for pod cert-manager/cert-manager-858654f9db-gt7xw: failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.205848 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6 podName:f9d3eaae-49ca-400c-a277-bdbad7f8125a nodeName:}" failed. 
No retries permitted until 2026-01-24 07:04:37.705829738 +0000 UTC m=+679.001935001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6xmb6" (UniqueName: "kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6") pod "cert-manager-858654f9db-gt7xw" (UID: "f9d3eaae-49ca-400c-a277-bdbad7f8125a") : failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.405087 4675 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.405399 4675 projected.go:194] Error preparing data for projected volume kube-api-access-7szw8 for pod cert-manager/cert-manager-webhook-687f57d79b-lthpk: failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: E0124 07:04:37.405641 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8 podName:261785a7-b436-4597-a36b-473d27769006 nodeName:}" failed. No retries permitted until 2026-01-24 07:04:37.905616591 +0000 UTC m=+679.201721834 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7szw8" (UniqueName: "kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8") pod "cert-manager-webhook-687f57d79b-lthpk" (UID: "261785a7-b436-4597-a36b-473d27769006") : failed to sync configmap cache: timed out waiting for the condition Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.610675 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.730045 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.730114 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xmb6\" (UniqueName: \"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.738263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzc65\" (UniqueName: \"kubernetes.io/projected/99008be6-effb-4dc7-a761-ee291c03f093-kube-api-access-fzc65\") pod \"cert-manager-cainjector-cf98fcc89-6kp8k\" (UID: \"99008be6-effb-4dc7-a761-ee291c03f093\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.740152 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xmb6\" (UniqueName: 
\"kubernetes.io/projected/f9d3eaae-49ca-400c-a277-bdbad7f8125a-kube-api-access-6xmb6\") pod \"cert-manager-858654f9db-gt7xw\" (UID: \"f9d3eaae-49ca-400c-a277-bdbad7f8125a\") " pod="cert-manager/cert-manager-858654f9db-gt7xw" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.890018 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.908699 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gt7xw" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.934502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:04:37 crc kubenswrapper[4675]: I0124 07:04:37.942181 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szw8\" (UniqueName: \"kubernetes.io/projected/261785a7-b436-4597-a36b-473d27769006-kube-api-access-7szw8\") pod \"cert-manager-webhook-687f57d79b-lthpk\" (UID: \"261785a7-b436-4597-a36b-473d27769006\") " pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.132560 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k"] Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.142880 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.190347 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gt7xw"] Jan 24 07:04:38 crc 
kubenswrapper[4675]: W0124 07:04:38.199067 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9d3eaae_49ca_400c_a277_bdbad7f8125a.slice/crio-fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c WatchSource:0}: Error finding container fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c: Status 404 returned error can't find the container with id fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.221911 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.389309 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-lthpk"] Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.672679 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" event={"ID":"261785a7-b436-4597-a36b-473d27769006","Type":"ContainerStarted","Data":"00cf920023e6ffc4d6f7e24c2c2255859f6fad1341112ac91d2093dca19d93f3"} Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.674378 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gt7xw" event={"ID":"f9d3eaae-49ca-400c-a277-bdbad7f8125a","Type":"ContainerStarted","Data":"fb357e59212e37f0e2db66f6063a5e04bfd0f6b1e643425afad86c720c313a1c"} Jan 24 07:04:38 crc kubenswrapper[4675]: I0124 07:04:38.676480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" event={"ID":"99008be6-effb-4dc7-a761-ee291c03f093","Type":"ContainerStarted","Data":"7e29ea7bcdef375d45c9f209149a5bc35dd4337366a2cd2a90a3ac2abc01cad7"} Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.706995 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858654f9db-gt7xw" event={"ID":"f9d3eaae-49ca-400c-a277-bdbad7f8125a","Type":"ContainerStarted","Data":"5d44fe14110728fd376b736028d1e0d2642ad15c3e234d7a9e40b5223aa9f122"} Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.708822 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" event={"ID":"99008be6-effb-4dc7-a761-ee291c03f093","Type":"ContainerStarted","Data":"8192d9542a3def62268ceb888430d162711471563eb38543f8807b8ca11a2993"} Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.710941 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" event={"ID":"261785a7-b436-4597-a36b-473d27769006","Type":"ContainerStarted","Data":"5e7047018f0893b71e5e22cf91c804ab9ae3cd999814cba2e0b986229582197d"} Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.711121 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.736775 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gt7xw" podStartSLOduration=3.288538577 podStartE2EDuration="6.736754069s" podCreationTimestamp="2026-01-24 07:04:36 +0000 UTC" firstStartedPulling="2026-01-24 07:04:38.202077429 +0000 UTC m=+679.498182652" lastFinishedPulling="2026-01-24 07:04:41.650292921 +0000 UTC m=+682.946398144" observedRunningTime="2026-01-24 07:04:42.721319957 +0000 UTC m=+684.017425210" watchObservedRunningTime="2026-01-24 07:04:42.736754069 +0000 UTC m=+684.032859292" Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.756109 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kp8k" podStartSLOduration=3.247615857 podStartE2EDuration="6.756092865s" podCreationTimestamp="2026-01-24 07:04:36 +0000 UTC" 
firstStartedPulling="2026-01-24 07:04:38.142649553 +0000 UTC m=+679.438754776" lastFinishedPulling="2026-01-24 07:04:41.651126521 +0000 UTC m=+682.947231784" observedRunningTime="2026-01-24 07:04:42.753096453 +0000 UTC m=+684.049201676" watchObservedRunningTime="2026-01-24 07:04:42.756092865 +0000 UTC m=+684.052198088" Jan 24 07:04:42 crc kubenswrapper[4675]: I0124 07:04:42.779041 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" podStartSLOduration=3.46098482 podStartE2EDuration="6.779021349s" podCreationTimestamp="2026-01-24 07:04:36 +0000 UTC" firstStartedPulling="2026-01-24 07:04:38.395030577 +0000 UTC m=+679.691135790" lastFinishedPulling="2026-01-24 07:04:41.713067096 +0000 UTC m=+683.009172319" observedRunningTime="2026-01-24 07:04:42.775227998 +0000 UTC m=+684.071333241" watchObservedRunningTime="2026-01-24 07:04:42.779021349 +0000 UTC m=+684.075126582" Jan 24 07:04:48 crc kubenswrapper[4675]: I0124 07:04:48.224042 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-lthpk" Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.630395 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.631062 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.777926 4675 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"] Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778610 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller" containerID="cri-o://38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778863 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging" containerID="cri-o://8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778876 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-node" containerID="cri-o://4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.779015 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.779043 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb" containerID="cri-o://efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778850 4675 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="nbdb" containerID="cri-o://51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.778878 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd" containerID="cri-o://c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.827975 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" containerID="cri-o://7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" gracePeriod=30 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.869192 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/2.log" Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.869985 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/1.log" Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870053 4675 generic.go:334] "Generic (PLEG): container finished" podID="61e129ca-c9dc-4375-b373-5eec702744bd" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b" exitCode=2 Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870085 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" 
event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerDied","Data":"c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b"} Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870121 4675 scope.go:117] "RemoveContainer" containerID="6c10418180001016c72fcbe5a3d14a0e4e7bae939fc8c3f6ff7abbb583376cfe" Jan 24 07:05:08 crc kubenswrapper[4675]: I0124 07:05:08.870775 4675 scope.go:117] "RemoveContainer" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b" Jan 24 07:05:08 crc kubenswrapper[4675]: E0124 07:05:08.871136 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zx9ns_openshift-multus(61e129ca-c9dc-4375-b373-5eec702744bd)\"" pod="openshift-multus/multus-zx9ns" podUID="61e129ca-c9dc-4375-b373-5eec702744bd" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.144061 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.145958 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-acl-logging/0.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.146313 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-controller/0.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.146738 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.197979 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j5cds"] Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198202 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198213 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198224 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198231 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198240 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198246 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198253 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198259 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198267 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" 
containerName="kube-rbac-proxy-node" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198273 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-node" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198281 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198287 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198293 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198298 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198305 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198311 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198323 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kubecfg-setup" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198329 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kubecfg-setup" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198338 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" 
containerName="nbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198344 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="nbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198359 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198444 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198451 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198458 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198465 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-node" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198473 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="nbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198483 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="kube-rbac-proxy-ovn-metrics" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198491 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198497 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="northd" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198505 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="sbdb" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198544 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovn-acl-logging" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198641 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198647 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: E0124 07:05:09.198654 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198660 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198766 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.198773 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerName="ovnkube-controller" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.200188 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304102 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304225 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304267 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304344 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304388 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304427 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304499 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304527 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304559 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304591 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 
07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304631 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304769 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304839 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.304971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305005 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305036 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305065 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") pod \"50a4333f-fd95-41a0-9ac8-4c21f9000870\" (UID: \"50a4333f-fd95-41a0-9ac8-4c21f9000870\") " Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305325 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305407 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-bin\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305501 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-log-socket\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-config\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305847 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-systemd-units\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305894 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-systemd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-ovn\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305927 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305963 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305937 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log" (OuterVolumeSpecName: "node-log") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305987 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.305989 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash" (OuterVolumeSpecName: "host-slash") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306009 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306019 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306036 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306068 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-slash\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-var-lib-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306133 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-node-log\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306177 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-script-lib\") pod 
\"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-netd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306373 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovn-node-metrics-cert\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306414 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-netns\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306445 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-env-overrides\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306269 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides" (OuterVolumeSpecName: "env-overrides") 
pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306288 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306406 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306448 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306517 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306553 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-kubelet\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306601 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306637 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306664 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket" (OuterVolumeSpecName: "log-socket") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306698 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-etc-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306913 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjcs\" (UniqueName: \"kubernetes.io/projected/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-kube-api-access-kxjcs\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306972 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.306986 4675 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307014 4675 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307030 4675 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-log-socket\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307042 4675 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307053 4675 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307065 4675 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307080 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc 
kubenswrapper[4675]: I0124 07:05:09.307093 4675 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307105 4675 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-node-log\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307115 4675 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307124 4675 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307134 4675 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307144 4675 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307154 4675 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307164 4675 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.307173 4675 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-host-slash\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.311299 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.311887 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz" (OuterVolumeSpecName: "kube-api-access-4qgbz") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "kube-api-access-4qgbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.319219 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "50a4333f-fd95-41a0-9ac8-4c21f9000870" (UID: "50a4333f-fd95-41a0-9ac8-4c21f9000870"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408550 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-log-socket\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408617 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-config\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408665 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-systemd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408695 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-ovn\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408757 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-systemd-units\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 
07:05:09.408798 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408834 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-slash\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408897 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-var-lib-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408923 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-node-log\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408963 
4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-script-lib\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.408990 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-netd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409020 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovn-node-metrics-cert\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409054 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-netns\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-env-overrides\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409111 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-kubelet\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-etc-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409192 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxjcs\" (UniqueName: \"kubernetes.io/projected/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-kube-api-access-kxjcs\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409239 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-bin\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409332 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409363 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qgbz\" (UniqueName: \"kubernetes.io/projected/50a4333f-fd95-41a0-9ac8-4c21f9000870-kube-api-access-4qgbz\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409384 4675 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50a4333f-fd95-41a0-9ac8-4c21f9000870-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409403 4675 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50a4333f-fd95-41a0-9ac8-4c21f9000870-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-bin\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.409517 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-log-socket\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: 
\"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410455 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-systemd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410480 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-ovn\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-etc-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410537 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-kubelet\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410569 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-slash\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-systemd-units\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410608 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-node-log\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-var-lib-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-run-openvswitch\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410668 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-run-netns\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410693 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-host-cni-netd\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.410696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-config\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.411381 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovnkube-script-lib\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.412299 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-env-overrides\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.414032 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-ovn-node-metrics-cert\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.439690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxjcs\" (UniqueName: \"kubernetes.io/projected/d32b06ce-9a59-45f2-96dd-3c8c7cf71845-kube-api-access-kxjcs\") pod \"ovnkube-node-j5cds\" (UID: \"d32b06ce-9a59-45f2-96dd-3c8c7cf71845\") " pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.514766 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:09 crc kubenswrapper[4675]: W0124 07:05:09.545685 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd32b06ce_9a59_45f2_96dd_3c8c7cf71845.slice/crio-992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06 WatchSource:0}: Error finding container 992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06: Status 404 returned error can't find the container with id 992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.874950 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/2.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.875903 4675 generic.go:334] "Generic (PLEG): container finished" podID="d32b06ce-9a59-45f2-96dd-3c8c7cf71845" containerID="28d55bfe8712002398d6bd88dcaa800ae0040b6fed4ddfae6b7d6f0582f3a179" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.875939 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" 
event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerDied","Data":"28d55bfe8712002398d6bd88dcaa800ae0040b6fed4ddfae6b7d6f0582f3a179"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.875959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"992ad3fe4d0c23fc55ba15cce80e0dbec199d3daa0fdeff46278a75790814b06"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.879272 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovnkube-controller/3.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883114 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-acl-logging/0.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883551 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vsnzs_50a4333f-fd95-41a0-9ac8-4c21f9000870/ovn-controller/0.log" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883907 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883935 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883942 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883951 4675 
generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883960 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883957 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884042 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884061 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884067 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884073 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.883969 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc" exitCode=0 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884134 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884150 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884169 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884241 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884255 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884155 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" 
containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea" exitCode=143 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884286 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884343 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884300 4675 generic.go:334] "Generic (PLEG): container finished" podID="50a4333f-fd95-41a0-9ac8-4c21f9000870" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382" exitCode=143 Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884380 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884407 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884419 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884429 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884441 4675 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884448 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884455 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884462 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884468 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884475 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884482 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884488 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884494 4675 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884501 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884522 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884530 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884537 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884562 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884570 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884576 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884583 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884586 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884589 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884685 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884694 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884706 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vsnzs" event={"ID":"50a4333f-fd95-41a0-9ac8-4c21f9000870","Type":"ContainerDied","Data":"62a8cadede7a21145e681044886ca9386d55c6d70c06dc737ae9eedf6acff8c9"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884762 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884775 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884782 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884789 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884796 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884803 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884810 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884816 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884822 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.884829 4675 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"}
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.939158 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.974443 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"]
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.974605 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.982873 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vsnzs"]
Jan 24 07:05:09 crc kubenswrapper[4675]: I0124 07:05:09.998683 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.011252 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.027253 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.040075 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.055234 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.067513 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.084469 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.098578 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.099007 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099043 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099067 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.099424 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099459 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099489 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.099752 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099781 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.099799 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100027 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100048 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100063 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100277 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100302 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100320 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100486 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100510 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100526 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100691 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100710 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100742 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.100914 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100933 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.100947 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.101132 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101157 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101172 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: E0124 07:05:10.101360 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101384 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101399 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101657 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101680 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101924 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.101944 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102194 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102221 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102491 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102512 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102681 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102698 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102876 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.102894 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103093 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103121 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103402 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103434 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103685 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103705 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103955 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.103972 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104180 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104199 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104396 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104414 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104621 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104640 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104865 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.104893 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105086 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105103 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105284 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105307 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105485 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105504 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105728 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105752 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105936 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.105966 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106184 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106205 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106698 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106735 4675 scope.go:117] "RemoveContainer" containerID="126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106904 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea"} err="failed to get container status \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": rpc error: code = NotFound desc = could not find container \"126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea\": container with ID starting with 126cca225f4e3e19e1a26663561ff29ea0d1f6ec16823a373b6dfc7eac4b2dea not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.106953 4675 scope.go:117] "RemoveContainer" containerID="efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107585 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034"} err="failed to get container status \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": rpc error: code = NotFound desc = could not find container \"efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034\": container with ID starting with efd668a7b06d74c4d57f6159175b9eef2388ca6c6166f42001085a67e5b1e034 not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107604 4675 scope.go:117] "RemoveContainer" containerID="51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107852 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c"} err="failed to get container status \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": rpc error: code = NotFound desc = could not find container \"51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c\": container with ID starting with 51147c8993e57df1bff252ee32660c075d3e8c19da74ace52a3ebc489be9e90c not found: ID does not exist"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.107882 4675 scope.go:117] "RemoveContainer" containerID="c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"
Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108072 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365"} err="failed to get container status \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": rpc error: code = NotFound desc = could not find container \"c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365\": container with ID starting with
c3655054fc503c5178c82efca8d2eb7edca87baffbf6babb09d86d1a4fd0c365 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108088 4675 scope.go:117] "RemoveContainer" containerID="3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108283 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93"} err="failed to get container status \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": rpc error: code = NotFound desc = could not find container \"3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93\": container with ID starting with 3c0e1d5209d11fc5bcda73e8f896e246af5fa6c64eca67a20cf883f9e4bebf93 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108301 4675 scope.go:117] "RemoveContainer" containerID="4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108495 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc"} err="failed to get container status \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": rpc error: code = NotFound desc = could not find container \"4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc\": container with ID starting with 4a310ad58c8547c7d7da8f4449051166527872edb2ab99b147674467d7c5d4bc not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108520 4675 scope.go:117] "RemoveContainer" containerID="8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108800 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea"} err="failed to get container status \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": rpc error: code = NotFound desc = could not find container \"8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea\": container with ID starting with 8308eabfcc6a55a0706a025aab3ccac848e3e7a7c4509cef3f9644b136ce53ea not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.108826 4675 scope.go:117] "RemoveContainer" containerID="38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109034 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382"} err="failed to get container status \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": rpc error: code = NotFound desc = could not find container \"38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382\": container with ID starting with 38081e184bf2fbd688d9f680aac2c76a528dff1bf7eea04a4a926c69c49e0382 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109054 4675 scope.go:117] "RemoveContainer" containerID="a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109276 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691"} err="failed to get container status \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": rpc error: code = NotFound desc = could not find container \"a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691\": container with ID starting with a319e0f0deba1885ecc22eb2ce08e5a845e703c9fb7763adc23e9cc143f97691 not found: ID does not 
exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109303 4675 scope.go:117] "RemoveContainer" containerID="7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.109516 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5"} err="failed to get container status \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": rpc error: code = NotFound desc = could not find container \"7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5\": container with ID starting with 7f0cd928d13f2231ce19d601cd090abe7c4d299c2febb1570884040b2f8c60c5 not found: ID does not exist" Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897238 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"32d1f53835f949c1219fb31e4eebd15aedc9248e4f0dcabf280e088c1e36bfc2"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"a2ecb1c37e8b54fdeea76b6616f56b47b9b82f72d373b2d4e1a58a50d673d6e8"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897587 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"cf39ffe8f239aca312d4d71d9d596fa9a343774a93fdb5551eafd26495c3e374"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897597 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" 
event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"732d68c8d8a6059ad0930bba3f793e0f13c0a850488d53ff70735d1654f8fa0a"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897607 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"8a40a66ed85e20deb131dcd8e54d3cf4a102986830a2031ae646b58bac168321"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.897618 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"669c27148464b5679f1099cdaaca1aac10e58f6ef2d188ed747c6f38670574ae"} Jan 24 07:05:10 crc kubenswrapper[4675]: I0124 07:05:10.952355 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a4333f-fd95-41a0-9ac8-4c21f9000870" path="/var/lib/kubelet/pods/50a4333f-fd95-41a0-9ac8-4c21f9000870/volumes" Jan 24 07:05:12 crc kubenswrapper[4675]: I0124 07:05:12.913045 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"07d70ccfcf72051dd4824cf8a3d5af7dd0f1a95ba7a79201917d1018950b322e"} Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.935524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" event={"ID":"d32b06ce-9a59-45f2-96dd-3c8c7cf71845","Type":"ContainerStarted","Data":"5aeff035facd893da23f740378eec371fb91507290668ebe30c1f36601784a31"} Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.936340 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.936361 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.936523 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.967883 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" podStartSLOduration=6.967863727 podStartE2EDuration="6.967863727s" podCreationTimestamp="2026-01-24 07:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:05:15.964818184 +0000 UTC m=+717.260923407" watchObservedRunningTime="2026-01-24 07:05:15.967863727 +0000 UTC m=+717.263968960" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.972652 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:15 crc kubenswrapper[4675]: I0124 07:05:15.982049 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:19 crc kubenswrapper[4675]: I0124 07:05:19.942309 4675 scope.go:117] "RemoveContainer" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b" Jan 24 07:05:19 crc kubenswrapper[4675]: E0124 07:05:19.943171 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zx9ns_openshift-multus(61e129ca-c9dc-4375-b373-5eec702744bd)\"" pod="openshift-multus/multus-zx9ns" podUID="61e129ca-c9dc-4375-b373-5eec702744bd" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.237694 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"] Jan 24 07:05:28 crc 
kubenswrapper[4675]: I0124 07:05:28.239472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.241386 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.251772 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"] Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.344465 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.344535 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.344626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.446703 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.446769 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.446842 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.447503 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.447539 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.466829 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: I0124 07:05:28.557171 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.590896 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.591014 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.591245 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:28 crc kubenswrapper[4675]: E0124 07:05:28.591349 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(bf5c3326320d96db43098edc55efc69e0d54ce0abbfac153e78000003406959b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" Jan 24 07:05:29 crc kubenswrapper[4675]: I0124 07:05:29.029166 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: I0124 07:05:29.029853 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.049964 4675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.050023 4675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.050046 4675 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:29 crc kubenswrapper[4675]: E0124 07:05:29.050090 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace(6a14a2ad-1879-4684-b69a-64e6bebf6424)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_openshift-marketplace_6a14a2ad-1879-4684-b69a-64e6bebf6424_0(958bcc7e7a3fd7c83240943ecdb524358189f25654de42c694f11d7a50d9d265): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" Jan 24 07:05:32 crc kubenswrapper[4675]: I0124 07:05:32.946765 4675 scope.go:117] "RemoveContainer" containerID="c0cb9a228a110e81324f7b918e71c835eddfd7522602f0110befbb680a1b112b" Jan 24 07:05:34 crc kubenswrapper[4675]: I0124 07:05:34.055073 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zx9ns_61e129ca-c9dc-4375-b373-5eec702744bd/kube-multus/2.log" Jan 24 07:05:34 crc kubenswrapper[4675]: I0124 07:05:34.055424 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zx9ns" event={"ID":"61e129ca-c9dc-4375-b373-5eec702744bd","Type":"ContainerStarted","Data":"4184088d4877e714961c97864e506bc2d3af178e0cb2be9b01953bb12d09d59e"} Jan 24 07:05:38 crc kubenswrapper[4675]: I0124 07:05:38.629465 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:05:38 crc kubenswrapper[4675]: I0124 07:05:38.629946 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:05:39 crc kubenswrapper[4675]: I0124 07:05:39.540791 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j5cds" Jan 24 07:05:42 crc kubenswrapper[4675]: I0124 07:05:42.942437 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:42 crc kubenswrapper[4675]: I0124 07:05:42.943420 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" Jan 24 07:05:43 crc kubenswrapper[4675]: I0124 07:05:43.436676 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"] Jan 24 07:05:43 crc kubenswrapper[4675]: W0124 07:05:43.447111 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a14a2ad_1879_4684_b69a_64e6bebf6424.slice/crio-98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8 WatchSource:0}: Error finding container 98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8: Status 404 returned error can't find the container with id 98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8 Jan 24 07:05:44 crc kubenswrapper[4675]: I0124 07:05:44.129302 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerID="4f6a97c6d78d429cb4ec7577903962e3f8536d297ac20945806a8956224a4cb9" exitCode=0 Jan 24 07:05:44 crc kubenswrapper[4675]: I0124 07:05:44.129646 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"4f6a97c6d78d429cb4ec7577903962e3f8536d297ac20945806a8956224a4cb9"} Jan 24 07:05:44 crc kubenswrapper[4675]: I0124 07:05:44.129687 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" 
event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerStarted","Data":"98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8"} Jan 24 07:05:46 crc kubenswrapper[4675]: I0124 07:05:46.142321 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerID="9cc2e5d414d97844e8b7e3f510160406460833ce1b2a12f2b679a5ddb0c7ef9f" exitCode=0 Jan 24 07:05:46 crc kubenswrapper[4675]: I0124 07:05:46.142376 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"9cc2e5d414d97844e8b7e3f510160406460833ce1b2a12f2b679a5ddb0c7ef9f"} Jan 24 07:05:47 crc kubenswrapper[4675]: I0124 07:05:47.153830 4675 generic.go:334] "Generic (PLEG): container finished" podID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerID="3bb88a6fd7112c2dfd38ae4a1a3632267eb02e47e5f3a30268c4dcce60f04cb5" exitCode=0 Jan 24 07:05:47 crc kubenswrapper[4675]: I0124 07:05:47.153903 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"3bb88a6fd7112c2dfd38ae4a1a3632267eb02e47e5f3a30268c4dcce60f04cb5"} Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.406085 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.517106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") pod \"6a14a2ad-1879-4684-b69a-64e6bebf6424\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") "
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.517345 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") pod \"6a14a2ad-1879-4684-b69a-64e6bebf6424\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") "
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.517446 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") pod \"6a14a2ad-1879-4684-b69a-64e6bebf6424\" (UID: \"6a14a2ad-1879-4684-b69a-64e6bebf6424\") "
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.518753 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle" (OuterVolumeSpecName: "bundle") pod "6a14a2ad-1879-4684-b69a-64e6bebf6424" (UID: "6a14a2ad-1879-4684-b69a-64e6bebf6424"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.527001 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9" (OuterVolumeSpecName: "kube-api-access-fsln9") pod "6a14a2ad-1879-4684-b69a-64e6bebf6424" (UID: "6a14a2ad-1879-4684-b69a-64e6bebf6424"). InnerVolumeSpecName "kube-api-access-fsln9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.537871 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util" (OuterVolumeSpecName: "util") pod "6a14a2ad-1879-4684-b69a-64e6bebf6424" (UID: "6a14a2ad-1879-4684-b69a-64e6bebf6424"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.619257 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsln9\" (UniqueName: \"kubernetes.io/projected/6a14a2ad-1879-4684-b69a-64e6bebf6424-kube-api-access-fsln9\") on node \"crc\" DevicePath \"\""
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.619310 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-util\") on node \"crc\" DevicePath \"\""
Jan 24 07:05:48 crc kubenswrapper[4675]: I0124 07:05:48.619332 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a14a2ad-1879-4684-b69a-64e6bebf6424-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.189904 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7" event={"ID":"6a14a2ad-1879-4684-b69a-64e6bebf6424","Type":"ContainerDied","Data":"98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8"}
Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.189942 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98deb0be21b60b3a9289edfcf4b738e5957122111f719521950f6628fc30fdb8"
Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.189954 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7"
Jan 24 07:05:49 crc kubenswrapper[4675]: I0124 07:05:49.238036 4675 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998189 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dm24p"]
Jan 24 07:05:54 crc kubenswrapper[4675]: E0124 07:05:54.998801 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="util"
Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998817 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="util"
Jan 24 07:05:54 crc kubenswrapper[4675]: E0124 07:05:54.998831 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="pull"
Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998839 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="pull"
Jan 24 07:05:54 crc kubenswrapper[4675]: E0124 07:05:54.998851 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="extract"
Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998859 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="extract"
Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.998968 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a14a2ad-1879-4684-b69a-64e6bebf6424" containerName="extract"
Jan 24 07:05:54 crc kubenswrapper[4675]: I0124 07:05:54.999473 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.002177 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.002405 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.002554 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-j9hkt"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.004322 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjjw\" (UniqueName: \"kubernetes.io/projected/b344cabd-3dd6-4691-990b-045aaf4c622f-kube-api-access-5sjjw\") pod \"nmstate-operator-646758c888-dm24p\" (UID: \"b344cabd-3dd6-4691-990b-045aaf4c622f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dm24p"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.056769 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dm24p"]
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.105542 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjjw\" (UniqueName: \"kubernetes.io/projected/b344cabd-3dd6-4691-990b-045aaf4c622f-kube-api-access-5sjjw\") pod \"nmstate-operator-646758c888-dm24p\" (UID: \"b344cabd-3dd6-4691-990b-045aaf4c622f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dm24p"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.121516 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjjw\" (UniqueName: \"kubernetes.io/projected/b344cabd-3dd6-4691-990b-045aaf4c622f-kube-api-access-5sjjw\") pod \"nmstate-operator-646758c888-dm24p\" (UID: \"b344cabd-3dd6-4691-990b-045aaf4c622f\") " pod="openshift-nmstate/nmstate-operator-646758c888-dm24p"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.315849 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p"
Jan 24 07:05:55 crc kubenswrapper[4675]: I0124 07:05:55.516237 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dm24p"]
Jan 24 07:05:56 crc kubenswrapper[4675]: I0124 07:05:56.226963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" event={"ID":"b344cabd-3dd6-4691-990b-045aaf4c622f","Type":"ContainerStarted","Data":"e80042bebb2fa475b623cf2d88b10c10af35fde26e67979d92737a7f15e88e5b"}
Jan 24 07:05:58 crc kubenswrapper[4675]: I0124 07:05:58.240141 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" event={"ID":"b344cabd-3dd6-4691-990b-045aaf4c622f","Type":"ContainerStarted","Data":"93cc2bf4ecde42a85f53b763a3efb7d54c8a771a3d936f169c6e1aa5ff7efe77"}
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.249302 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dm24p" podStartSLOduration=3.206494882 podStartE2EDuration="5.249282092s" podCreationTimestamp="2026-01-24 07:05:54 +0000 UTC" firstStartedPulling="2026-01-24 07:05:55.525488244 +0000 UTC m=+756.821593467" lastFinishedPulling="2026-01-24 07:05:57.568275454 +0000 UTC m=+758.864380677" observedRunningTime="2026-01-24 07:05:58.255995534 +0000 UTC m=+759.552100787" watchObservedRunningTime="2026-01-24 07:05:59.249282092 +0000 UTC m=+760.545387325"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.266593 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c56d8"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.268977 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.272447 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.273468 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.282786 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c56d8"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.292666 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.308262 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gw8vt"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.335115 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ljst6"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.335971 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.363852 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365404 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-nmstate-lock\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365480 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-dbus-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365508 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/469eb31f-c261-4d7f-8a12-c10ed969bd55-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365531 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqlx\" (UniqueName: \"kubernetes.io/projected/56a6d660-7a53-4b25-b4e4-3d3f97a67430-kube-api-access-dqqlx\") pod \"nmstate-metrics-54757c584b-c56d8\" (UID: \"56a6d660-7a53-4b25-b4e4-3d3f97a67430\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365553 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59pr\" (UniqueName: \"kubernetes.io/projected/8c82b668-f857-4de6-a938-333a7e44591f-kube-api-access-h59pr\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365608 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-ovs-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.365638 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgr4c\" (UniqueName: \"kubernetes.io/projected/469eb31f-c261-4d7f-8a12-c10ed969bd55-kube-api-access-dgr4c\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466205 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-dbus-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/469eb31f-c261-4d7f-8a12-c10ed969bd55-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqlx\" (UniqueName: \"kubernetes.io/projected/56a6d660-7a53-4b25-b4e4-3d3f97a67430-kube-api-access-dqqlx\") pod \"nmstate-metrics-54757c584b-c56d8\" (UID: \"56a6d660-7a53-4b25-b4e4-3d3f97a67430\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59pr\" (UniqueName: \"kubernetes.io/projected/8c82b668-f857-4de6-a938-333a7e44591f-kube-api-access-h59pr\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-ovs-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466455 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-ovs-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-dbus-socket\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466578 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgr4c\" (UniqueName: \"kubernetes.io/projected/469eb31f-c261-4d7f-8a12-c10ed969bd55-kube-api-access-dgr4c\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.466815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-nmstate-lock\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.467033 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c82b668-f857-4de6-a938-333a7e44591f-nmstate-lock\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.474824 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/469eb31f-c261-4d7f-8a12-c10ed969bd55-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.486444 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgr4c\" (UniqueName: \"kubernetes.io/projected/469eb31f-c261-4d7f-8a12-c10ed969bd55-kube-api-access-dgr4c\") pod \"nmstate-webhook-8474b5b9d8-77dfm\" (UID: \"469eb31f-c261-4d7f-8a12-c10ed969bd55\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.490402 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqlx\" (UniqueName: \"kubernetes.io/projected/56a6d660-7a53-4b25-b4e4-3d3f97a67430-kube-api-access-dqqlx\") pod \"nmstate-metrics-54757c584b-c56d8\" (UID: \"56a6d660-7a53-4b25-b4e4-3d3f97a67430\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.491515 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.492150 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.493788 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59pr\" (UniqueName: \"kubernetes.io/projected/8c82b668-f857-4de6-a938-333a7e44591f-kube-api-access-h59pr\") pod \"nmstate-handler-ljst6\" (UID: \"8c82b668-f857-4de6-a938-333a7e44591f\") " pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.495937 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.497714 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-95qzk"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.512071 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.517767 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.567429 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b289d862-4851-4f88-9a5b-4bed8cd70bd8-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.567479 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.567532 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6nq\" (UniqueName: \"kubernetes.io/projected/b289d862-4851-4f88-9a5b-4bed8cd70bd8-kube-api-access-8d6nq\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.592382 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.612519 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.652133 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ljst6"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.677337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6nq\" (UniqueName: \"kubernetes.io/projected/b289d862-4851-4f88-9a5b-4bed8cd70bd8-kube-api-access-8d6nq\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.677402 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b289d862-4851-4f88-9a5b-4bed8cd70bd8-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.677436 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: E0124 07:05:59.677544 4675 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Jan 24 07:05:59 crc kubenswrapper[4675]: E0124 07:05:59.677595 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert podName:b289d862-4851-4f88-9a5b-4bed8cd70bd8 nodeName:}" failed. No retries permitted until 2026-01-24 07:06:00.177577804 +0000 UTC m=+761.473683027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-szblh" (UID: "b289d862-4851-4f88-9a5b-4bed8cd70bd8") : secret "plugin-serving-cert" not found
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.678463 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b289d862-4851-4f88-9a5b-4bed8cd70bd8-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.698646 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-cbdf5f797-nnh6k"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.699247 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.723572 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6nq\" (UniqueName: \"kubernetes.io/projected/b289d862-4851-4f88-9a5b-4bed8cd70bd8-kube-api-access-8d6nq\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.727149 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cbdf5f797-nnh6k"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.881928 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-oauth-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.882260 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-oauth-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.882287 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-service-ca\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883058 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883124 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883166 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-trusted-ca-bundle\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.883183 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8q5\" (UniqueName: \"kubernetes.io/projected/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-kube-api-access-4m8q5\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.905542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-c56d8"]
Jan 24 07:05:59 crc kubenswrapper[4675]: W0124 07:05:59.913814 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56a6d660_7a53_4b25_b4e4_3d3f97a67430.slice/crio-1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881 WatchSource:0}: Error finding container 1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881: Status 404 returned error can't find the container with id 1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.971585 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm"]
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984150 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-oauth-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984205 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-service-ca\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984249 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984305 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-trusted-ca-bundle\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984320 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8q5\" (UniqueName: \"kubernetes.io/projected/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-kube-api-access-4m8q5\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.984381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-oauth-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.985398 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-oauth-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.985917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.987265 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-service-ca\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.987418 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-trusted-ca-bundle\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.990816 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-serving-cert\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:05:59 crc kubenswrapper[4675]: I0124 07:05:59.991075 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-console-oauth-config\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.003123 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8q5\" (UniqueName: \"kubernetes.io/projected/0d59c29c-83d5-481c-bc19-e0bb7acf8b75-kube-api-access-4m8q5\") pod \"console-cbdf5f797-nnh6k\" (UID: \"0d59c29c-83d5-481c-bc19-e0bb7acf8b75\") " pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.025077 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cbdf5f797-nnh6k"
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.190591 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.194320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b289d862-4851-4f88-9a5b-4bed8cd70bd8-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-szblh\" (UID: \"b289d862-4851-4f88-9a5b-4bed8cd70bd8\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.197796 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cbdf5f797-nnh6k"]
Jan 24 07:06:00 crc kubenswrapper[4675]: W0124 07:06:00.200504 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d59c29c_83d5_481c_bc19_e0bb7acf8b75.slice/crio-4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2 WatchSource:0}: Error finding container 4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2: Status 404 returned error can't find the container with id 4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.250026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ljst6" event={"ID":"8c82b668-f857-4de6-a938-333a7e44591f","Type":"ContainerStarted","Data":"1ed2ebcc10564f7083d3d1fbcbfeee9e607764c4e45150c9cad5204ec5fe371a"}
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.253239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" event={"ID":"469eb31f-c261-4d7f-8a12-c10ed969bd55","Type":"ContainerStarted","Data":"6f5d82e3dcd8c7e1c8ffbdb333466eb1cfdb33888a9242a76f69da6c1899d2fa"}
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.255999 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbdf5f797-nnh6k" event={"ID":"0d59c29c-83d5-481c-bc19-e0bb7acf8b75","Type":"ContainerStarted","Data":"4e6cd223d667417f51b5c7e7dbd87d7bdb3caec5f6da3dae3ae4067533c9d9b2"}
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.257054 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" event={"ID":"56a6d660-7a53-4b25-b4e4-3d3f97a67430","Type":"ContainerStarted","Data":"1377524cf636e8f6d143a952530e98f389176861fd1b82a9ad828581564c9881"}
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.451618 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"
Jan 24 07:06:00 crc kubenswrapper[4675]: I0124 07:06:00.673437 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh"]
Jan 24 07:06:00 crc kubenswrapper[4675]: W0124 07:06:00.680017 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb289d862_4851_4f88_9a5b_4bed8cd70bd8.slice/crio-5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455 WatchSource:0}: Error finding container 5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455: Status 404 returned error can't find the container with id 5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455
Jan 24 07:06:01 crc kubenswrapper[4675]: I0124 07:06:01.265795 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cbdf5f797-nnh6k" event={"ID":"0d59c29c-83d5-481c-bc19-e0bb7acf8b75","Type":"ContainerStarted","Data":"e4a7622f6e4eb44ef5754ff21466c99266d679e1006f7833d5365c94ef1d5985"}
Jan 24 07:06:01 crc kubenswrapper[4675]: I0124 07:06:01.269453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" event={"ID":"b289d862-4851-4f88-9a5b-4bed8cd70bd8","Type":"ContainerStarted","Data":"5f907246ea3ce984aa71314df71bbd303fea3e75635c4a15ef5c4bf85d464455"}
Jan 24 07:06:01 crc kubenswrapper[4675]: I0124 07:06:01.288672 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-cbdf5f797-nnh6k" podStartSLOduration=2.288651831 podStartE2EDuration="2.288651831s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:06:01.285773622 +0000 UTC m=+762.581878875" watchObservedRunningTime="2026-01-24
07:06:01.288651831 +0000 UTC m=+762.584757054" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.282658 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" event={"ID":"b289d862-4851-4f88-9a5b-4bed8cd70bd8","Type":"ContainerStarted","Data":"e17a7392a9ee8fad75e776d2951ae1b6298562f9d3b6459ece24cd0935e32fac"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.285659 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" event={"ID":"56a6d660-7a53-4b25-b4e4-3d3f97a67430","Type":"ContainerStarted","Data":"4d16f7493074f5a11f41f156db582a7db56f8501907befe873d2395a86c85355"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.286984 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ljst6" event={"ID":"8c82b668-f857-4de6-a938-333a7e44591f","Type":"ContainerStarted","Data":"c7f88f371c5e75739d4c674c37c8c6835a48ff0036fc48e77724a8426988b56a"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.287107 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.288313 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" event={"ID":"469eb31f-c261-4d7f-8a12-c10ed969bd55","Type":"ContainerStarted","Data":"2204e60210d282b694b456e044d53d4943e4d000d7bd0546023bb2fbec57f988"} Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.288475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.299677 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-szblh" podStartSLOduration=2.015700271 podStartE2EDuration="4.299654512s" 
podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:06:00.681966668 +0000 UTC m=+761.978071891" lastFinishedPulling="2026-01-24 07:06:02.965920909 +0000 UTC m=+764.262026132" observedRunningTime="2026-01-24 07:06:03.297779357 +0000 UTC m=+764.593884580" watchObservedRunningTime="2026-01-24 07:06:03.299654512 +0000 UTC m=+764.595759735" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.357219 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" podStartSLOduration=1.311132488 podStartE2EDuration="4.357203397s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:05:59.973863631 +0000 UTC m=+761.269968854" lastFinishedPulling="2026-01-24 07:06:03.01993452 +0000 UTC m=+764.316039763" observedRunningTime="2026-01-24 07:06:03.353444307 +0000 UTC m=+764.649549530" watchObservedRunningTime="2026-01-24 07:06:03.357203397 +0000 UTC m=+764.653308620" Jan 24 07:06:03 crc kubenswrapper[4675]: I0124 07:06:03.358151 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ljst6" podStartSLOduration=1.131837236 podStartE2EDuration="4.35814388s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:05:59.738002977 +0000 UTC m=+761.034108200" lastFinishedPulling="2026-01-24 07:06:02.964309611 +0000 UTC m=+764.260414844" observedRunningTime="2026-01-24 07:06:03.326915743 +0000 UTC m=+764.623020966" watchObservedRunningTime="2026-01-24 07:06:03.35814388 +0000 UTC m=+764.654249123" Jan 24 07:06:06 crc kubenswrapper[4675]: I0124 07:06:06.351970 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" event={"ID":"56a6d660-7a53-4b25-b4e4-3d3f97a67430","Type":"ContainerStarted","Data":"90567a0c179975879e6cd00341195750d2d56cdff6c343f397abb1fedf9255e2"} Jan 24 07:06:06 crc kubenswrapper[4675]: 
I0124 07:06:06.371147 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-c56d8" podStartSLOduration=1.968620694 podStartE2EDuration="7.371122476s" podCreationTimestamp="2026-01-24 07:05:59 +0000 UTC" firstStartedPulling="2026-01-24 07:05:59.916382748 +0000 UTC m=+761.212487961" lastFinishedPulling="2026-01-24 07:06:05.31888452 +0000 UTC m=+766.614989743" observedRunningTime="2026-01-24 07:06:06.368860363 +0000 UTC m=+767.664965636" watchObservedRunningTime="2026-01-24 07:06:06.371122476 +0000 UTC m=+767.667227719" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.690428 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.691741 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.711225 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.781712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.781848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc 
kubenswrapper[4675]: I0124 07:06:07.781895 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.883393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.883496 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.884347 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.885179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.885254 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:07 crc kubenswrapper[4675]: I0124 07:06:07.904762 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"certified-operators-v27xv\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.017583 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.339611 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:08 crc kubenswrapper[4675]: W0124 07:06:08.344900 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d389362_a217_4c05_9d80_ed31811768dc.slice/crio-fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb WatchSource:0}: Error finding container fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb: Status 404 returned error can't find the container with id fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.366686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerStarted","Data":"fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb"} Jan 24 07:06:08 crc 
kubenswrapper[4675]: I0124 07:06:08.630387 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.630685 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.630753 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.631308 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:06:08 crc kubenswrapper[4675]: I0124 07:06:08.631364 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984" gracePeriod=600 Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.373493 4675 generic.go:334] "Generic (PLEG): container finished" podID="7d389362-a217-4c05-9d80-ed31811768dc" 
containerID="e194927f2a604d7b81f11294d797a1da906415ab50991bcb93caea8c2a7657e2" exitCode=0 Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.373573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"e194927f2a604d7b81f11294d797a1da906415ab50991bcb93caea8c2a7657e2"} Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378753 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984" exitCode=0 Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378793 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984"} Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378825 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d"} Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.378881 4675 scope.go:117] "RemoveContainer" containerID="ebaeb609c0074454fae3a07713a0c14f928ac8324d172f12c2024146e541ed58" Jan 24 07:06:09 crc kubenswrapper[4675]: I0124 07:06:09.677838 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ljst6" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.025253 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.025525 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.030729 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.389412 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerStarted","Data":"14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99"} Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.406392 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-cbdf5f797-nnh6k" Jan 24 07:06:10 crc kubenswrapper[4675]: I0124 07:06:10.465005 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.409343 4675 generic.go:334] "Generic (PLEG): container finished" podID="7d389362-a217-4c05-9d80-ed31811768dc" containerID="14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99" exitCode=0 Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.410011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99"} Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.410063 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerStarted","Data":"4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26"} Jan 24 07:06:11 crc kubenswrapper[4675]: I0124 07:06:11.439403 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-v27xv" podStartSLOduration=3.055744939 podStartE2EDuration="4.439351322s" podCreationTimestamp="2026-01-24 07:06:07 +0000 UTC" firstStartedPulling="2026-01-24 07:06:09.378306475 +0000 UTC m=+770.674411698" lastFinishedPulling="2026-01-24 07:06:10.761912858 +0000 UTC m=+772.058018081" observedRunningTime="2026-01-24 07:06:11.437667002 +0000 UTC m=+772.733772225" watchObservedRunningTime="2026-01-24 07:06:11.439351322 +0000 UTC m=+772.735456545" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.879927 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.883883 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.888900 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.910590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.910761 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:15 crc kubenswrapper[4675]: I0124 07:06:15.910880 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012124 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012528 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012660 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.012794 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.013066 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.030897 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"redhat-operators-jvpcf\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.208231 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:16 crc kubenswrapper[4675]: I0124 07:06:16.643700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:17 crc kubenswrapper[4675]: I0124 07:06:17.445222 4675 generic.go:334] "Generic (PLEG): container finished" podID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerID="1d98d86bba040151e8b910b0f27ae9bbdfed1abab02298643071d56d97f908e2" exitCode=0 Jan 24 07:06:17 crc kubenswrapper[4675]: I0124 07:06:17.445303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"1d98d86bba040151e8b910b0f27ae9bbdfed1abab02298643071d56d97f908e2"} Jan 24 07:06:17 crc kubenswrapper[4675]: I0124 07:06:17.445459 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerStarted","Data":"c409d86ef591245d15fd8a59452d9dc84cc21a8a336c9225e59ad1d8785554a8"} Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.017962 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.018159 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.070752 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.455138 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerStarted","Data":"edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99"} Jan 24 07:06:18 crc kubenswrapper[4675]: I0124 07:06:18.510912 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:19 crc kubenswrapper[4675]: I0124 07:06:19.463829 4675 generic.go:334] "Generic (PLEG): container finished" podID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerID="edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99" exitCode=0 Jan 24 07:06:19 crc kubenswrapper[4675]: I0124 07:06:19.463918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99"} Jan 24 07:06:19 crc kubenswrapper[4675]: I0124 07:06:19.618799 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-77dfm" Jan 24 07:06:20 crc kubenswrapper[4675]: I0124 07:06:20.472149 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" 
event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerStarted","Data":"df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f"} Jan 24 07:06:20 crc kubenswrapper[4675]: I0124 07:06:20.494109 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jvpcf" podStartSLOduration=3.088694909 podStartE2EDuration="5.494092052s" podCreationTimestamp="2026-01-24 07:06:15 +0000 UTC" firstStartedPulling="2026-01-24 07:06:17.44748943 +0000 UTC m=+778.743594653" lastFinishedPulling="2026-01-24 07:06:19.852886573 +0000 UTC m=+781.148991796" observedRunningTime="2026-01-24 07:06:20.492233038 +0000 UTC m=+781.788338261" watchObservedRunningTime="2026-01-24 07:06:20.494092052 +0000 UTC m=+781.790197275" Jan 24 07:06:21 crc kubenswrapper[4675]: I0124 07:06:21.672644 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:21 crc kubenswrapper[4675]: I0124 07:06:21.672901 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v27xv" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" containerID="cri-o://4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26" gracePeriod=2 Jan 24 07:06:24 crc kubenswrapper[4675]: I0124 07:06:24.493266 4675 generic.go:334] "Generic (PLEG): container finished" podID="7d389362-a217-4c05-9d80-ed31811768dc" containerID="4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26" exitCode=0 Jan 24 07:06:24 crc kubenswrapper[4675]: I0124 07:06:24.493353 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26"} Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.287666 4675 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.288778 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.345233 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.345292 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.345364 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.354553 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.446323 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"redhat-marketplace-qsnl7\" (UID: 
\"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.446690 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.446874 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.447331 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.447465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.469962 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.470376 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"redhat-marketplace-qsnl7\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.500095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v27xv" event={"ID":"7d389362-a217-4c05-9d80-ed31811768dc","Type":"ContainerDied","Data":"fa15feb38b16773851020c7922510f5f3d3e5af4863f6130648f5b5ef270dbdb"} Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.500139 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v27xv" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.500149 4675 scope.go:117] "RemoveContainer" containerID="4e9e6f8c749f2e988430076a77477f784428b97da668e3b7be0c788942d51e26" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.530671 4675 scope.go:117] "RemoveContainer" containerID="14bdb8046df62e42fedba183978f222d84eb3b834d5444fbbef0c68556721d99" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.548256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") pod \"7d389362-a217-4c05-9d80-ed31811768dc\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.548308 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") pod 
\"7d389362-a217-4c05-9d80-ed31811768dc\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.548341 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") pod \"7d389362-a217-4c05-9d80-ed31811768dc\" (UID: \"7d389362-a217-4c05-9d80-ed31811768dc\") " Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.549814 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities" (OuterVolumeSpecName: "utilities") pod "7d389362-a217-4c05-9d80-ed31811768dc" (UID: "7d389362-a217-4c05-9d80-ed31811768dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.560537 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm" (OuterVolumeSpecName: "kube-api-access-6qvvm") pod "7d389362-a217-4c05-9d80-ed31811768dc" (UID: "7d389362-a217-4c05-9d80-ed31811768dc"). InnerVolumeSpecName "kube-api-access-6qvvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.564897 4675 scope.go:117] "RemoveContainer" containerID="e194927f2a604d7b81f11294d797a1da906415ab50991bcb93caea8c2a7657e2" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.607668 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.647852 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d389362-a217-4c05-9d80-ed31811768dc" (UID: "7d389362-a217-4c05-9d80-ed31811768dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.651354 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.651393 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d389362-a217-4c05-9d80-ed31811768dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.651408 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qvvm\" (UniqueName: \"kubernetes.io/projected/7d389362-a217-4c05-9d80-ed31811768dc-kube-api-access-6qvvm\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.839848 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.845289 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v27xv"] Jan 24 07:06:25 crc kubenswrapper[4675]: I0124 07:06:25.890916 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.210271 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.211275 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.246693 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.507912 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerStarted","Data":"3a6b939ba9b0f49f20dbbe8a4b746d440813cfcb7dc60015231946094cb0835d"} Jan 24 07:06:26 crc kubenswrapper[4675]: I0124 07:06:26.568893 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:27 crc kubenswrapper[4675]: I0124 07:06:27.000059 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d389362-a217-4c05-9d80-ed31811768dc" path="/var/lib/kubelet/pods/7d389362-a217-4c05-9d80-ed31811768dc/volumes" Jan 24 07:06:29 crc kubenswrapper[4675]: I0124 07:06:29.528810 4675 generic.go:334] "Generic (PLEG): container finished" podID="4941a74b-f8db-4960-a1d7-7585b2099620" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6" exitCode=0 Jan 24 07:06:29 crc kubenswrapper[4675]: I0124 07:06:29.529153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6"} Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.274500 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.275185 
4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jvpcf" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" containerID="cri-o://df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f" gracePeriod=2 Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.548891 4675 generic.go:334] "Generic (PLEG): container finished" podID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerID="df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f" exitCode=0 Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.549031 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f"} Jan 24 07:06:30 crc kubenswrapper[4675]: I0124 07:06:30.553113 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerStarted","Data":"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a"} Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.122670 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.151714 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") pod \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.151885 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") pod \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.151937 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") pod \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\" (UID: \"34d5d6a5-fbe7-4f14-a530-8b78604a61a3\") " Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.152781 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities" (OuterVolumeSpecName: "utilities") pod "34d5d6a5-fbe7-4f14-a530-8b78604a61a3" (UID: "34d5d6a5-fbe7-4f14-a530-8b78604a61a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.157325 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl" (OuterVolumeSpecName: "kube-api-access-g4hpl") pod "34d5d6a5-fbe7-4f14-a530-8b78604a61a3" (UID: "34d5d6a5-fbe7-4f14-a530-8b78604a61a3"). InnerVolumeSpecName "kube-api-access-g4hpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.253392 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4hpl\" (UniqueName: \"kubernetes.io/projected/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-kube-api-access-g4hpl\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.253426 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.267525 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34d5d6a5-fbe7-4f14-a530-8b78604a61a3" (UID: "34d5d6a5-fbe7-4f14-a530-8b78604a61a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.354889 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d5d6a5-fbe7-4f14-a530-8b78604a61a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.561026 4675 generic.go:334] "Generic (PLEG): container finished" podID="4941a74b-f8db-4960-a1d7-7585b2099620" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" exitCode=0 Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.561080 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a"} Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.564265 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jvpcf" event={"ID":"34d5d6a5-fbe7-4f14-a530-8b78604a61a3","Type":"ContainerDied","Data":"c409d86ef591245d15fd8a59452d9dc84cc21a8a336c9225e59ad1d8785554a8"} Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.564328 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvpcf" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.564337 4675 scope.go:117] "RemoveContainer" containerID="df4378cbe34ce1ff1d9b23ad146643fab41006a05293fb890a58de7617cc3b7f" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.596889 4675 scope.go:117] "RemoveContainer" containerID="edde818446df1a2eddfa227f57318e821f8f9185b605f45d799771d126c45a99" Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.612032 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.615748 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jvpcf"] Jan 24 07:06:31 crc kubenswrapper[4675]: I0124 07:06:31.636370 4675 scope.go:117] "RemoveContainer" containerID="1d98d86bba040151e8b910b0f27ae9bbdfed1abab02298643071d56d97f908e2" Jan 24 07:06:32 crc kubenswrapper[4675]: I0124 07:06:32.574542 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerStarted","Data":"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f"} Jan 24 07:06:32 crc kubenswrapper[4675]: I0124 07:06:32.950008 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" path="/var/lib/kubelet/pods/34d5d6a5-fbe7-4f14-a530-8b78604a61a3/volumes" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.515192 4675 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-f9d7485db-c64jl" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" containerID="cri-o://3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" gracePeriod=15 Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.608924 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.608992 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.670332 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.698606 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qsnl7" podStartSLOduration=8.232331873 podStartE2EDuration="10.698585577s" podCreationTimestamp="2026-01-24 07:06:25 +0000 UTC" firstStartedPulling="2026-01-24 07:06:29.530904836 +0000 UTC m=+790.827010109" lastFinishedPulling="2026-01-24 07:06:31.9971586 +0000 UTC m=+793.293263813" observedRunningTime="2026-01-24 07:06:32.599948373 +0000 UTC m=+793.896053626" watchObservedRunningTime="2026-01-24 07:06:35.698585577 +0000 UTC m=+796.994690800" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.942624 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c64jl_c66b0b0f-0581-49e6-bfa7-548678ab6de8/console/0.log" Jan 24 07:06:35 crc kubenswrapper[4675]: I0124 07:06:35.944766 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020291 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020404 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020426 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020443 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020491 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.020514 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") pod \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\" (UID: \"c66b0b0f-0581-49e6-bfa7-548678ab6de8\") " Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.027576 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.027755 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config" (OuterVolumeSpecName: "console-config") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.028218 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.030531 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca" (OuterVolumeSpecName: "service-ca") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.038566 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.047354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25" (OuterVolumeSpecName: "kube-api-access-95m25") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "kube-api-access-95m25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.050149 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c66b0b0f-0581-49e6-bfa7-548678ab6de8" (UID: "c66b0b0f-0581-49e6-bfa7-548678ab6de8"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122489 4675 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122856 4675 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122872 4675 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122884 4675 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122898 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95m25\" (UniqueName: \"kubernetes.io/projected/c66b0b0f-0581-49e6-bfa7-548678ab6de8-kube-api-access-95m25\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122910 4675 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66b0b0f-0581-49e6-bfa7-548678ab6de8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.122923 4675 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c66b0b0f-0581-49e6-bfa7-548678ab6de8-service-ca\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:36 crc 
kubenswrapper[4675]: I0124 07:06:36.602039 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c64jl_c66b0b0f-0581-49e6-bfa7-548678ab6de8/console/0.log" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602117 4675 generic.go:334] "Generic (PLEG): container finished" podID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" exitCode=2 Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerDied","Data":"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663"} Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602186 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c64jl" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602237 4675 scope.go:117] "RemoveContainer" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.602226 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c64jl" event={"ID":"c66b0b0f-0581-49e6-bfa7-548678ab6de8","Type":"ContainerDied","Data":"9f62761dfa0e23278a88b4c9d7acb6c23e771672906712e8cf7b32e35ec90e90"} Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.620530 4675 scope.go:117] "RemoveContainer" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.620966 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663\": container with ID starting with 3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663 not 
found: ID does not exist" containerID="3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.621005 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663"} err="failed to get container status \"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663\": rpc error: code = NotFound desc = could not find container \"3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663\": container with ID starting with 3b5ee1f01456a50cbfe74b66e1b8962dbadd3401abd4001129cf571bde1db663 not found: ID does not exist" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.633778 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.639547 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c64jl"] Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.938547 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc"] Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.938948 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.938970 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.938989 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939004 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939026 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939039 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-content" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939070 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939088 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939117 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939133 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939162 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939178 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: E0124 07:06:36.939197 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939210 4675 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="extract-utilities" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939423 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d5d6a5-fbe7-4f14-a530-8b78604a61a3" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939449 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" containerName="console" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.939480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d389362-a217-4c05-9d80-ed31811768dc" containerName="registry-server" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.940973 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.944119 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.950146 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66b0b0f-0581-49e6-bfa7-548678ab6de8" path="/var/lib/kubelet/pods/c66b0b0f-0581-49e6-bfa7-548678ab6de8/volumes" Jan 24 07:06:36 crc kubenswrapper[4675]: I0124 07:06:36.950834 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc"] Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.040361 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.040452 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.040485 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.141890 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.142003 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc 
kubenswrapper[4675]: I0124 07:06:37.142086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.142461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.142603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.159490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.261672 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.449141 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc"] Jan 24 07:06:37 crc kubenswrapper[4675]: W0124 07:06:37.462960 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a17869_4316_441a_ba35_dc9c1660b966.slice/crio-74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0 WatchSource:0}: Error finding container 74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0: Status 404 returned error can't find the container with id 74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0 Jan 24 07:06:37 crc kubenswrapper[4675]: I0124 07:06:37.609049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerStarted","Data":"74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0"} Jan 24 07:06:38 crc kubenswrapper[4675]: I0124 07:06:38.620390 4675 generic.go:334] "Generic (PLEG): container finished" podID="55a17869-4316-441a-ba35-dc9c1660b966" containerID="fe323b02c4e6e39780da2f5799519f1b1a30682395decc4ac60a5930313e5ebf" exitCode=0 Jan 24 07:06:38 crc kubenswrapper[4675]: I0124 07:06:38.620802 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"fe323b02c4e6e39780da2f5799519f1b1a30682395decc4ac60a5930313e5ebf"} Jan 24 07:06:41 crc kubenswrapper[4675]: I0124 07:06:41.655056 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="55a17869-4316-441a-ba35-dc9c1660b966" containerID="513678d78adb6113c117cac2ca6b3799e554a164b11057d772bdf64986771363" exitCode=0 Jan 24 07:06:41 crc kubenswrapper[4675]: I0124 07:06:41.655119 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"513678d78adb6113c117cac2ca6b3799e554a164b11057d772bdf64986771363"} Jan 24 07:06:42 crc kubenswrapper[4675]: I0124 07:06:42.662031 4675 generic.go:334] "Generic (PLEG): container finished" podID="55a17869-4316-441a-ba35-dc9c1660b966" containerID="9a1e6c728b0cd2692615eb66d6293f36556f6b94d0544479444ea681d67e49ff" exitCode=0 Jan 24 07:06:42 crc kubenswrapper[4675]: I0124 07:06:42.662079 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"9a1e6c728b0cd2692615eb66d6293f36556f6b94d0544479444ea681d67e49ff"} Jan 24 07:06:43 crc kubenswrapper[4675]: I0124 07:06:43.915840 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.035044 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") pod \"55a17869-4316-441a-ba35-dc9c1660b966\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.035155 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") pod \"55a17869-4316-441a-ba35-dc9c1660b966\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.035226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") pod \"55a17869-4316-441a-ba35-dc9c1660b966\" (UID: \"55a17869-4316-441a-ba35-dc9c1660b966\") " Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.036443 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle" (OuterVolumeSpecName: "bundle") pod "55a17869-4316-441a-ba35-dc9c1660b966" (UID: "55a17869-4316-441a-ba35-dc9c1660b966"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.042078 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt" (OuterVolumeSpecName: "kube-api-access-5wzxt") pod "55a17869-4316-441a-ba35-dc9c1660b966" (UID: "55a17869-4316-441a-ba35-dc9c1660b966"). InnerVolumeSpecName "kube-api-access-5wzxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.047232 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util" (OuterVolumeSpecName: "util") pod "55a17869-4316-441a-ba35-dc9c1660b966" (UID: "55a17869-4316-441a-ba35-dc9c1660b966"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.136893 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-util\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.136959 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wzxt\" (UniqueName: \"kubernetes.io/projected/55a17869-4316-441a-ba35-dc9c1660b966-kube-api-access-5wzxt\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.136991 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55a17869-4316-441a-ba35-dc9c1660b966-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.697123 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" event={"ID":"55a17869-4316-441a-ba35-dc9c1660b966","Type":"ContainerDied","Data":"74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0"} Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.697425 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ea7a4ad0c636e54a871784c598d6b8f67956cccf33cfa4081b0d1993ac96c0" Jan 24 07:06:44 crc kubenswrapper[4675]: I0124 07:06:44.697268 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290136 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:45 crc kubenswrapper[4675]: E0124 07:06:45.290448 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="pull" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290467 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="pull" Jan 24 07:06:45 crc kubenswrapper[4675]: E0124 07:06:45.290491 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="extract" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290504 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="extract" Jan 24 07:06:45 crc kubenswrapper[4675]: E0124 07:06:45.290539 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="util" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290551 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="util" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.290712 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a17869-4316-441a-ba35-dc9c1660b966" containerName="extract" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.292834 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.299890 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.351132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.351257 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.351284 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452293 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452344 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.452869 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.453164 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.472933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"community-operators-r2zwn\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") " pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.635277 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.686849 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:45 crc kubenswrapper[4675]: I0124 07:06:45.992144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"] Jan 24 07:06:46 crc kubenswrapper[4675]: I0124 07:06:46.712832 4675 generic.go:334] "Generic (PLEG): container finished" podID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492" exitCode=0 Jan 24 07:06:46 crc kubenswrapper[4675]: I0124 07:06:46.712948 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492"} Jan 24 07:06:46 crc kubenswrapper[4675]: I0124 07:06:46.713203 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerStarted","Data":"50d2143d6a15ce1effec7497d78dc551e5d67a35354b8365b053d80281a18399"} Jan 24 07:06:47 crc kubenswrapper[4675]: I0124 07:06:47.720860 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerStarted","Data":"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"} Jan 24 07:06:48 crc kubenswrapper[4675]: I0124 07:06:48.728869 4675 generic.go:334] "Generic (PLEG): container finished" podID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786" exitCode=0 Jan 24 07:06:48 crc kubenswrapper[4675]: I0124 07:06:48.728964 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"} Jan 24 07:06:49 crc kubenswrapper[4675]: I0124 07:06:49.736609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerStarted","Data":"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"} Jan 24 07:06:49 crc kubenswrapper[4675]: I0124 07:06:49.756301 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r2zwn" podStartSLOduration=2.033364793 podStartE2EDuration="4.756279907s" podCreationTimestamp="2026-01-24 07:06:45 +0000 UTC" firstStartedPulling="2026-01-24 07:06:46.714963654 +0000 UTC m=+808.011068917" lastFinishedPulling="2026-01-24 07:06:49.437878818 +0000 UTC m=+810.733984031" observedRunningTime="2026-01-24 07:06:49.755265773 +0000 UTC m=+811.051370996" watchObservedRunningTime="2026-01-24 07:06:49.756279907 +0000 UTC m=+811.052385130" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.271577 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.271801 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qsnl7" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server" containerID="cri-o://b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" gracePeriod=2 Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.659617 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.718457 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") pod \"4941a74b-f8db-4960-a1d7-7585b2099620\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.718549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") pod \"4941a74b-f8db-4960-a1d7-7585b2099620\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.718614 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") pod \"4941a74b-f8db-4960-a1d7-7585b2099620\" (UID: \"4941a74b-f8db-4960-a1d7-7585b2099620\") " Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.727432 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities" (OuterVolumeSpecName: "utilities") pod "4941a74b-f8db-4960-a1d7-7585b2099620" (UID: "4941a74b-f8db-4960-a1d7-7585b2099620"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.736913 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8" (OuterVolumeSpecName: "kube-api-access-5wpk8") pod "4941a74b-f8db-4960-a1d7-7585b2099620" (UID: "4941a74b-f8db-4960-a1d7-7585b2099620"). InnerVolumeSpecName "kube-api-access-5wpk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.750092 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4941a74b-f8db-4960-a1d7-7585b2099620" (UID: "4941a74b-f8db-4960-a1d7-7585b2099620"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.754497 4675 generic.go:334] "Generic (PLEG): container finished" podID="4941a74b-f8db-4960-a1d7-7585b2099620" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" exitCode=0 Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.755433 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsnl7" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.755963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f"} Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.756021 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsnl7" event={"ID":"4941a74b-f8db-4960-a1d7-7585b2099620","Type":"ContainerDied","Data":"3a6b939ba9b0f49f20dbbe8a4b746d440813cfcb7dc60015231946094cb0835d"} Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.756044 4675 scope.go:117] "RemoveContainer" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.771564 4675 scope.go:117] "RemoveContainer" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 
07:06:50.794331 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.797432 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsnl7"] Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.800245 4675 scope.go:117] "RemoveContainer" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.824832 4675 scope.go:117] "RemoveContainer" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.825433 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.825468 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wpk8\" (UniqueName: \"kubernetes.io/projected/4941a74b-f8db-4960-a1d7-7585b2099620-kube-api-access-5wpk8\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.825479 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4941a74b-f8db-4960-a1d7-7585b2099620-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:06:50 crc kubenswrapper[4675]: E0124 07:06:50.829649 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f\": container with ID starting with b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f not found: ID does not exist" containerID="b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.829712 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f"} err="failed to get container status \"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f\": rpc error: code = NotFound desc = could not find container \"b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f\": container with ID starting with b4e5691c13efa3a37153c43a99f4d103e72ab6e19e41209e55dac30459acc76f not found: ID does not exist" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.829751 4675 scope.go:117] "RemoveContainer" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" Jan 24 07:06:50 crc kubenswrapper[4675]: E0124 07:06:50.830059 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a\": container with ID starting with 659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a not found: ID does not exist" containerID="659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.830076 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a"} err="failed to get container status \"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a\": rpc error: code = NotFound desc = could not find container \"659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a\": container with ID starting with 659f74d2f512d3826200551f75f53819cab5f7cabe0577185eeb1341534dd70a not found: ID does not exist" Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.830090 4675 scope.go:117] "RemoveContainer" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6" Jan 24 07:06:50 crc kubenswrapper[4675]: E0124 
07:06:50.830279 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6\": container with ID starting with 7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6 not found: ID does not exist" containerID="7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6"
Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.830296 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6"} err="failed to get container status \"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6\": rpc error: code = NotFound desc = could not find container \"7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6\": container with ID starting with 7b073fb296e32024727f4ac941e7937f27dec72a39d13eb39bda4ce653e5ccb6 not found: ID does not exist"
Jan 24 07:06:50 crc kubenswrapper[4675]: I0124 07:06:50.949133 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" path="/var/lib/kubelet/pods/4941a74b-f8db-4960-a1d7-7585b2099620/volumes"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.833911 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"]
Jan 24 07:06:52 crc kubenswrapper[4675]: E0124 07:06:52.836821 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-utilities"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.838470 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-utilities"
Jan 24 07:06:52 crc kubenswrapper[4675]: E0124 07:06:52.838575 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-content"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.838633 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="extract-content"
Jan 24 07:06:52 crc kubenswrapper[4675]: E0124 07:06:52.838704 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.838785 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.839030 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4941a74b-f8db-4960-a1d7-7585b2099620" containerName="registry-server"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.839785 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848052 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848140 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848320 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-t8rpx"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848363 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.848471 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.854432 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"]
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.954940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd7f\" (UniqueName: \"kubernetes.io/projected/0cf0ee32-c416-4629-a441-268fbe054062-kube-api-access-fvd7f\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.954984 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-webhook-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:52 crc kubenswrapper[4675]: I0124 07:06:52.955037 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-apiservice-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.059807 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"]
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.060444 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.061666 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvd7f\" (UniqueName: \"kubernetes.io/projected/0cf0ee32-c416-4629-a441-268fbe054062-kube-api-access-fvd7f\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.061701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-webhook-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.061765 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-apiservice-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.062697 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.065815 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.066680 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ww4hr"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.066890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-webhook-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.066939 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf0ee32-c416-4629-a441-268fbe054062-apiservice-cert\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.116537 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvd7f\" (UniqueName: \"kubernetes.io/projected/0cf0ee32-c416-4629-a441-268fbe054062-kube-api-access-fvd7f\") pod \"metallb-operator-controller-manager-57d867674d-x4v6v\" (UID: \"0cf0ee32-c416-4629-a441-268fbe054062\") " pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.148250 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"]
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.153534 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.164559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-webhook-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.164663 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-apiservice-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.164689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9pr\" (UniqueName: \"kubernetes.io/projected/893cbc8e-86ae-4910-8693-061301da0ba6-kube-api-access-cc9pr\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.266187 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-webhook-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.266232 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-apiservice-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.266263 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9pr\" (UniqueName: \"kubernetes.io/projected/893cbc8e-86ae-4910-8693-061301da0ba6-kube-api-access-cc9pr\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.270622 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-webhook-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.288270 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc9pr\" (UniqueName: \"kubernetes.io/projected/893cbc8e-86ae-4910-8693-061301da0ba6-kube-api-access-cc9pr\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.290176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/893cbc8e-86ae-4910-8693-061301da0ba6-apiservice-cert\") pod \"metallb-operator-webhook-server-5f499b46f-tntmc\" (UID: \"893cbc8e-86ae-4910-8693-061301da0ba6\") " pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.405270 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.443140 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"]
Jan 24 07:06:53 crc kubenswrapper[4675]: W0124 07:06:53.456856 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cf0ee32_c416_4629_a441_268fbe054062.slice/crio-a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab WatchSource:0}: Error finding container a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab: Status 404 returned error can't find the container with id a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.668525 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"]
Jan 24 07:06:53 crc kubenswrapper[4675]: W0124 07:06:53.676937 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod893cbc8e_86ae_4910_8693_061301da0ba6.slice/crio-ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac WatchSource:0}: Error finding container ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac: Status 404 returned error can't find the container with id ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.770197 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" event={"ID":"893cbc8e-86ae-4910-8693-061301da0ba6","Type":"ContainerStarted","Data":"ffa913848951fab816606a52607b4008615af0e1c44aab8be4007d3609ce72ac"}
Jan 24 07:06:53 crc kubenswrapper[4675]: I0124 07:06:53.771058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" event={"ID":"0cf0ee32-c416-4629-a441-268fbe054062","Type":"ContainerStarted","Data":"a5dc44614e4c757b61b3b91b0f18b7ece1fb5b2b847465523adb05a3800d9bab"}
Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.636890 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r2zwn"
Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.637241 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r2zwn"
Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.679606 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r2zwn"
Jan 24 07:06:55 crc kubenswrapper[4675]: I0124 07:06:55.822948 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r2zwn"
Jan 24 07:06:58 crc kubenswrapper[4675]: I0124 07:06:58.875114 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"]
Jan 24 07:06:58 crc kubenswrapper[4675]: I0124 07:06:58.875691 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r2zwn" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server" containerID="cri-o://c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e" gracePeriod=2
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.502332 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.542556 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") pod \"6b44a7e8-d142-46b0-94fd-a0635212218a\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") "
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.542638 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") pod \"6b44a7e8-d142-46b0-94fd-a0635212218a\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") "
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.542681 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") pod \"6b44a7e8-d142-46b0-94fd-a0635212218a\" (UID: \"6b44a7e8-d142-46b0-94fd-a0635212218a\") "
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.543543 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities" (OuterVolumeSpecName: "utilities") pod "6b44a7e8-d142-46b0-94fd-a0635212218a" (UID: "6b44a7e8-d142-46b0-94fd-a0635212218a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.556019 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6" (OuterVolumeSpecName: "kube-api-access-r2mj6") pod "6b44a7e8-d142-46b0-94fd-a0635212218a" (UID: "6b44a7e8-d142-46b0-94fd-a0635212218a"). InnerVolumeSpecName "kube-api-access-r2mj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.614256 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b44a7e8-d142-46b0-94fd-a0635212218a" (UID: "6b44a7e8-d142-46b0-94fd-a0635212218a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.644942 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.644969 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b44a7e8-d142-46b0-94fd-a0635212218a-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.644979 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2mj6\" (UniqueName: \"kubernetes.io/projected/6b44a7e8-d142-46b0-94fd-a0635212218a-kube-api-access-r2mj6\") on node \"crc\" DevicePath \"\""
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816176 4675 generic.go:334] "Generic (PLEG): container finished" podID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e" exitCode=0
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816254 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"}
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816279 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r2zwn" event={"ID":"6b44a7e8-d142-46b0-94fd-a0635212218a","Type":"ContainerDied","Data":"50d2143d6a15ce1effec7497d78dc551e5d67a35354b8365b053d80281a18399"}
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816295 4675 scope.go:117] "RemoveContainer" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.816459 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r2zwn"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.823362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" event={"ID":"893cbc8e-86ae-4910-8693-061301da0ba6","Type":"ContainerStarted","Data":"638dc5f054ef699396f1519ba591ea3455defd8fd3829bad716febfaa7b48cfb"}
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.823996 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.829319 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" event={"ID":"0cf0ee32-c416-4629-a441-268fbe054062","Type":"ContainerStarted","Data":"7ceefb2707b7849f8f522ef3b2d6e5cfc2a7824c7ccb4fc66f76b49f8b249d88"}
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.829764 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.835410 4675 scope.go:117] "RemoveContainer" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.848275 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc" podStartSLOduration=0.943908259 podStartE2EDuration="6.848258689s" podCreationTimestamp="2026-01-24 07:06:53 +0000 UTC" firstStartedPulling="2026-01-24 07:06:53.683875597 +0000 UTC m=+814.979980820" lastFinishedPulling="2026-01-24 07:06:59.588226027 +0000 UTC m=+820.884331250" observedRunningTime="2026-01-24 07:06:59.846139368 +0000 UTC m=+821.142244591" watchObservedRunningTime="2026-01-24 07:06:59.848258689 +0000 UTC m=+821.144363912"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.865335 4675 scope.go:117] "RemoveContainer" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.888634 4675 scope.go:117] "RemoveContainer" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"
Jan 24 07:06:59 crc kubenswrapper[4675]: E0124 07:06:59.890643 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e\": container with ID starting with c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e not found: ID does not exist" containerID="c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.890899 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e"} err="failed to get container status \"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e\": rpc error: code = NotFound desc = could not find container \"c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e\": container with ID starting with c57a3fa7e848f5c7b934ca0de7eb3a542d23b7f4f23c14ad18be67651aeb555e not found: ID does not exist"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.890937 4675 scope.go:117] "RemoveContainer" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.894695 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v" podStartSLOduration=1.785445368 podStartE2EDuration="7.89467377s" podCreationTimestamp="2026-01-24 07:06:52 +0000 UTC" firstStartedPulling="2026-01-24 07:06:53.459531189 +0000 UTC m=+814.755636412" lastFinishedPulling="2026-01-24 07:06:59.568759591 +0000 UTC m=+820.864864814" observedRunningTime="2026-01-24 07:06:59.880626214 +0000 UTC m=+821.176731437" watchObservedRunningTime="2026-01-24 07:06:59.89467377 +0000 UTC m=+821.190778993"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.894989 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"]
Jan 24 07:06:59 crc kubenswrapper[4675]: E0124 07:06:59.895943 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786\": container with ID starting with 8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786 not found: ID does not exist" containerID="8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.895991 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786"} err="failed to get container status \"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786\": rpc error: code = NotFound desc = could not find container \"8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786\": container with ID starting with 8e4fcaf5732efcf36aad912d9aeb772c787e8aa51f165fd95b136f483faee786 not found: ID does not exist"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.896013 4675 scope.go:117] "RemoveContainer" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492"
Jan 24 07:06:59 crc kubenswrapper[4675]: E0124 07:06:59.896648 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492\": container with ID starting with 0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492 not found: ID does not exist" containerID="0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.896689 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492"} err="failed to get container status \"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492\": rpc error: code = NotFound desc = could not find container \"0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492\": container with ID starting with 0e60f1c6f09260125c135d4876b513f719cc794c854660ae3ecaaeabd2326492 not found: ID does not exist"
Jan 24 07:06:59 crc kubenswrapper[4675]: I0124 07:06:59.905626 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r2zwn"]
Jan 24 07:07:00 crc kubenswrapper[4675]: I0124 07:07:00.948524 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" path="/var/lib/kubelet/pods/6b44a7e8-d142-46b0-94fd-a0635212218a/volumes"
Jan 24 07:07:13 crc kubenswrapper[4675]: I0124 07:07:13.410504 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f499b46f-tntmc"
Jan 24 07:07:33 crc kubenswrapper[4675]: I0124 07:07:33.158266 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-57d867674d-x4v6v"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028094 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-78f4w"]
Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.028311 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-content"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-content"
Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.028339 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-utilities"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028345 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="extract-utilities"
Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.028355 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028361 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.028471 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b44a7e8-d142-46b0-94fd-a0635212218a" containerName="registry-server"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.030156 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.038493 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.038493 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.038567 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2fwq2"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.042856 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"]
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.043518 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.045596 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097062 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097344 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-reloader\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097370 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097384 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vdmj\" (UniqueName: \"kubernetes.io/projected/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-kube-api-access-7vdmj\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097458 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-startup\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097489 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-conf\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097503 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxht\" (UniqueName: \"kubernetes.io/projected/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-kube-api-access-4nxht\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097517 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-sockets\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.097542 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.101328 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"]
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.164567 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5bpc7"]
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.165373 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5bpc7"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.170657 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4r4bl"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.170888 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.170948 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.171011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.199450 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-startup\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.199777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-conf\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.199916 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxht\" (UniqueName: \"kubernetes.io/projected/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-kube-api-access-4nxht\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200062 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-sockets\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200183 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200313 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200414 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-reloader\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200519 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"
Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.200626 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vdmj\" (UniqueName: \"kubernetes.io/projected/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-kube-api-access-7vdmj\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID:
\"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.201461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-sockets\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.201827 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.202003 4675 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.204530 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs podName:fa6ce697-eaf1-4412-a7ca-40a3eb3fa712 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:34.704511064 +0000 UTC m=+856.000616287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs") pod "frr-k8s-78f4w" (UID: "fa6ce697-eaf1-4412-a7ca-40a3eb3fa712") : secret "frr-k8s-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.202669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-conf\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.202859 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-reloader\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.202494 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-frr-startup\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.220800 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-c4k6t"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.226320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxht\" (UniqueName: \"kubernetes.io/projected/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-kube-api-access-4nxht\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.238965 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.241161 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vdmj\" (UniqueName: \"kubernetes.io/projected/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-kube-api-access-7vdmj\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.260044 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.270825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/032ac1eb-bb7f-4f94-b9ad-4d710032f3af-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-skd24\" (UID: \"032ac1eb-bb7f-4f94-b9ad-4d710032f3af\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.272796 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-c4k6t"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.301797 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metallb-excludel2\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.301988 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnnz\" (UniqueName: \"kubernetes.io/projected/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-kube-api-access-rdnnz\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " 
pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.302015 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.302060 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.402925 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403150 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqntg\" (UniqueName: \"kubernetes.io/projected/af8e6625-69ed-4901-9577-65cc6fafe0d1-kube-api-access-fqntg\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdnnz\" (UniqueName: \"kubernetes.io/projected/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-kube-api-access-rdnnz\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403397 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-metrics-certs\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403477 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metallb-excludel2\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.403501 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-cert\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404462 4675 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404593 4675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist podName:21ad12ca-5157-4c19-9e8c-34fbe8fa9b96 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:34.904571995 +0000 UTC m=+856.200677298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist") pod "speaker-5bpc7" (UID: "21ad12ca-5157-4c19-9e8c-34fbe8fa9b96") : secret "metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404530 4675 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.404795 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs podName:21ad12ca-5157-4c19-9e8c-34fbe8fa9b96 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:34.90478452 +0000 UTC m=+856.200889793 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs") pod "speaker-5bpc7" (UID: "21ad12ca-5157-4c19-9e8c-34fbe8fa9b96") : secret "speaker-certs-secret" not found Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.405132 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metallb-excludel2\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.439312 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdnnz\" (UniqueName: \"kubernetes.io/projected/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-kube-api-access-rdnnz\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.506302 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-metrics-certs\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.506625 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-cert\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.506678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqntg\" (UniqueName: 
\"kubernetes.io/projected/af8e6625-69ed-4901-9577-65cc6fafe0d1-kube-api-access-fqntg\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.509112 4675 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.511016 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-metrics-certs\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.520535 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af8e6625-69ed-4901-9577-65cc6fafe0d1-cert\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.522810 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqntg\" (UniqueName: \"kubernetes.io/projected/af8e6625-69ed-4901-9577-65cc6fafe0d1-kube-api-access-fqntg\") pod \"controller-6968d8fdc4-c4k6t\" (UID: \"af8e6625-69ed-4901-9577-65cc6fafe0d1\") " pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.583351 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.592467 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.708225 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.713935 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa6ce697-eaf1-4412-a7ca-40a3eb3fa712-metrics-certs\") pod \"frr-k8s-78f4w\" (UID: \"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712\") " pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.780474 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-c4k6t"] Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.911088 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.911251 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.911412 4675 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: E0124 07:07:34.911464 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist podName:21ad12ca-5157-4c19-9e8c-34fbe8fa9b96 nodeName:}" failed. No retries permitted until 2026-01-24 07:07:35.911448138 +0000 UTC m=+857.207553361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist") pod "speaker-5bpc7" (UID: "21ad12ca-5157-4c19-9e8c-34fbe8fa9b96") : secret "metallb-memberlist" not found Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.916175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-metrics-certs\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:34 crc kubenswrapper[4675]: I0124 07:07:34.956010 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-78f4w" Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.014444 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" event={"ID":"032ac1eb-bb7f-4f94-b9ad-4d710032f3af","Type":"ContainerStarted","Data":"94e750f0ae6ac9183645c58f6564c0ea448569d3238fb8d39e9d25945e7d1eea"} Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.015659 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c4k6t" event={"ID":"af8e6625-69ed-4901-9577-65cc6fafe0d1","Type":"ContainerStarted","Data":"859429b026aa754fd934437ae19b3294ff2811182303dc413e7149dd6cc66f83"} Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.921548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:35 crc kubenswrapper[4675]: I0124 07:07:35.944451 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21ad12ca-5157-4c19-9e8c-34fbe8fa9b96-memberlist\") pod \"speaker-5bpc7\" (UID: \"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96\") " pod="metallb-system/speaker-5bpc7" Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.001283 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5bpc7" Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.023855 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"109edcdaadd0f45cbe2a7b641e11bb2f039e60332933aeb9788df1103d845e80"} Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.027618 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c4k6t" event={"ID":"af8e6625-69ed-4901-9577-65cc6fafe0d1","Type":"ContainerStarted","Data":"bb953cbb3571d7fa20935f0c4d699e092a6f104f7399328ac2ed6f5cffde3044"} Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.027643 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-c4k6t" event={"ID":"af8e6625-69ed-4901-9577-65cc6fafe0d1","Type":"ContainerStarted","Data":"34095ca9ec088ddeb46c70be8ddb7e066479273835bd505a0f9173566f85e598"} Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.028059 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-c4k6t" Jan 24 07:07:36 crc kubenswrapper[4675]: W0124 07:07:36.035278 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ad12ca_5157_4c19_9e8c_34fbe8fa9b96.slice/crio-02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8 WatchSource:0}: Error finding container 02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8: Status 404 returned error can't find the container with id 02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8 Jan 24 07:07:36 crc kubenswrapper[4675]: I0124 07:07:36.049913 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-c4k6t" podStartSLOduration=2.04989579 podStartE2EDuration="2.04989579s" 
podCreationTimestamp="2026-01-24 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:07:36.049302196 +0000 UTC m=+857.345407449" watchObservedRunningTime="2026-01-24 07:07:36.04989579 +0000 UTC m=+857.346001013" Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.043843 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5bpc7" event={"ID":"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96","Type":"ContainerStarted","Data":"5b364002e9ca948df1c59004b1639fd758859def43c15bb386297b5e3f2a996f"} Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.043906 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5bpc7" event={"ID":"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96","Type":"ContainerStarted","Data":"55c10003d6d8bcae0786761c23428ce104ebec925a5c8d52fec1d952347f5e79"} Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.043923 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5bpc7" event={"ID":"21ad12ca-5157-4c19-9e8c-34fbe8fa9b96","Type":"ContainerStarted","Data":"02e6994e7e67b63dc14eaaf10e48ef1422940589c43fc732acaf6647d51c2ea8"} Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.044109 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5bpc7" Jan 24 07:07:37 crc kubenswrapper[4675]: I0124 07:07:37.065424 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5bpc7" podStartSLOduration=3.065407891 podStartE2EDuration="3.065407891s" podCreationTimestamp="2026-01-24 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:07:37.063779391 +0000 UTC m=+858.359884614" watchObservedRunningTime="2026-01-24 07:07:37.065407891 +0000 UTC m=+858.361513114" Jan 24 07:07:43 crc kubenswrapper[4675]: 
I0124 07:07:43.089535 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" event={"ID":"032ac1eb-bb7f-4f94-b9ad-4d710032f3af","Type":"ContainerStarted","Data":"7c819efc4f465b4a5612228aff471cdf1de4b4b3668452765fee0de53b7202d0"} Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.090052 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.091681 4675 generic.go:334] "Generic (PLEG): container finished" podID="fa6ce697-eaf1-4412-a7ca-40a3eb3fa712" containerID="4781b14d9af67c3c9982c74fcf04c706d2e84d50ebdaa1a49f08ee784de4c9ee" exitCode=0 Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.091764 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerDied","Data":"4781b14d9af67c3c9982c74fcf04c706d2e84d50ebdaa1a49f08ee784de4c9ee"} Jan 24 07:07:43 crc kubenswrapper[4675]: I0124 07:07:43.106314 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24" podStartSLOduration=0.911545931 podStartE2EDuration="9.106293881s" podCreationTimestamp="2026-01-24 07:07:34 +0000 UTC" firstStartedPulling="2026-01-24 07:07:34.6073201 +0000 UTC m=+855.903425323" lastFinishedPulling="2026-01-24 07:07:42.80206805 +0000 UTC m=+864.098173273" observedRunningTime="2026-01-24 07:07:43.10378953 +0000 UTC m=+864.399894753" watchObservedRunningTime="2026-01-24 07:07:43.106293881 +0000 UTC m=+864.402399104" Jan 24 07:07:44 crc kubenswrapper[4675]: I0124 07:07:44.096921 4675 generic.go:334] "Generic (PLEG): container finished" podID="fa6ce697-eaf1-4412-a7ca-40a3eb3fa712" containerID="46240cca07b659f78a621d75df937a4dc1e26462af437c89664affcaccb6f475" exitCode=0 Jan 24 07:07:44 crc kubenswrapper[4675]: I0124 07:07:44.097022 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerDied","Data":"46240cca07b659f78a621d75df937a4dc1e26462af437c89664affcaccb6f475"}
Jan 24 07:07:45 crc kubenswrapper[4675]: I0124 07:07:45.107528 4675 generic.go:334] "Generic (PLEG): container finished" podID="fa6ce697-eaf1-4412-a7ca-40a3eb3fa712" containerID="faf4eefce35793e912ce4bccc29429eefc37468c976a2605c95dc2abe15c1876" exitCode=0
Jan 24 07:07:45 crc kubenswrapper[4675]: I0124 07:07:45.107579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerDied","Data":"faf4eefce35793e912ce4bccc29429eefc37468c976a2605c95dc2abe15c1876"}
Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.019272 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5bpc7"
Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118767 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"edde50c157a4941b24dc7f1026ab713c5763b8fd9fb27b5bb6af374e73e46792"}
Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118813 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"c3705ce7d4e8946bed2f0e7864dd24b868f8b40871b481425e99de323739964b"}
Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118826 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"cd55267f82d6d766124fd4c2c0b3e9263d6c540a69e99a2d2202d3cf527e87e9"}
Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118837 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"4b935eb6d368ca0ddc907282f9ca4a4064cc425eb85524db469f8f280af7406d"}
Jan 24 07:07:46 crc kubenswrapper[4675]: I0124 07:07:46.118849 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"28b2ee528d24d172f73674e78a5096babe54dc85c899e425d5f7a6ac811fa758"}
Jan 24 07:07:47 crc kubenswrapper[4675]: I0124 07:07:47.126784 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-78f4w" event={"ID":"fa6ce697-eaf1-4412-a7ca-40a3eb3fa712","Type":"ContainerStarted","Data":"38e96173b01ba929ed5758ad5d421b8deda1a6baf8fe8778bc1ee19647c1de55"}
Jan 24 07:07:47 crc kubenswrapper[4675]: I0124 07:07:47.127039 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:47 crc kubenswrapper[4675]: I0124 07:07:47.151268 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-78f4w" podStartSLOduration=7.0629863 podStartE2EDuration="14.151251618s" podCreationTimestamp="2026-01-24 07:07:33 +0000 UTC" firstStartedPulling="2026-01-24 07:07:35.73156076 +0000 UTC m=+857.027666023" lastFinishedPulling="2026-01-24 07:07:42.819826118 +0000 UTC m=+864.115931341" observedRunningTime="2026-01-24 07:07:47.147568869 +0000 UTC m=+868.443674082" watchObservedRunningTime="2026-01-24 07:07:47.151251618 +0000 UTC m=+868.447356841"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.432891 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"]
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.433587 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.436189 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-js4nb"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.436205 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.448004 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.448948 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"]
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.502014 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"openstack-operator-index-vd22g\" (UID: \"d5597e1b-5874-4483-bf56-679470f1a288\") " pod="openstack-operators/openstack-operator-index-vd22g"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.603382 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"openstack-operator-index-vd22g\" (UID: \"d5597e1b-5874-4483-bf56-679470f1a288\") " pod="openstack-operators/openstack-operator-index-vd22g"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.629931 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"openstack-operator-index-vd22g\" (UID: \"d5597e1b-5874-4483-bf56-679470f1a288\") " pod="openstack-operators/openstack-operator-index-vd22g"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.765513 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.957031 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:49 crc kubenswrapper[4675]: I0124 07:07:49.968604 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"]
Jan 24 07:07:50 crc kubenswrapper[4675]: I0124 07:07:50.024277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:07:50 crc kubenswrapper[4675]: I0124 07:07:50.145883 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerStarted","Data":"65792c9a12bd755b36506910452a8b4e058f9cc6c39651f649e047267f59c4e5"}
Jan 24 07:07:52 crc kubenswrapper[4675]: I0124 07:07:52.820966 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"]
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.165232 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerStarted","Data":"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"}
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.182887 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vd22g" podStartSLOduration=1.8336292680000001 podStartE2EDuration="4.182869485s" podCreationTimestamp="2026-01-24 07:07:49 +0000 UTC" firstStartedPulling="2026-01-24 07:07:50.00037676 +0000 UTC m=+871.296481993" lastFinishedPulling="2026-01-24 07:07:52.349616967 +0000 UTC m=+873.645722210" observedRunningTime="2026-01-24 07:07:53.181992993 +0000 UTC m=+874.478098236" watchObservedRunningTime="2026-01-24 07:07:53.182869485 +0000 UTC m=+874.478974698"
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.421056 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d4hsh"]
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.422157 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.436997 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d4hsh"]
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.452901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kwkd\" (UniqueName: \"kubernetes.io/projected/954076ba-3e6f-4e5b-9b3f-4637840d5021-kube-api-access-4kwkd\") pod \"openstack-operator-index-d4hsh\" (UID: \"954076ba-3e6f-4e5b-9b3f-4637840d5021\") " pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.554498 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kwkd\" (UniqueName: \"kubernetes.io/projected/954076ba-3e6f-4e5b-9b3f-4637840d5021-kube-api-access-4kwkd\") pod \"openstack-operator-index-d4hsh\" (UID: \"954076ba-3e6f-4e5b-9b3f-4637840d5021\") " pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.574157 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kwkd\" (UniqueName: \"kubernetes.io/projected/954076ba-3e6f-4e5b-9b3f-4637840d5021-kube-api-access-4kwkd\") pod \"openstack-operator-index-d4hsh\" (UID: \"954076ba-3e6f-4e5b-9b3f-4637840d5021\") " pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:07:53 crc kubenswrapper[4675]: I0124 07:07:53.736896 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.105698 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d4hsh"]
Jan 24 07:07:54 crc kubenswrapper[4675]: W0124 07:07:54.109248 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod954076ba_3e6f_4e5b_9b3f_4637840d5021.slice/crio-26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e WatchSource:0}: Error finding container 26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e: Status 404 returned error can't find the container with id 26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e
Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.171611 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vd22g" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server" containerID="cri-o://b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187" gracePeriod=2
Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.171914 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d4hsh" event={"ID":"954076ba-3e6f-4e5b-9b3f-4637840d5021","Type":"ContainerStarted","Data":"26e6852aede836d510dec6ed21dc43eba7b1a360ecbf541f514fbc239825f15e"}
Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.410256 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-skd24"
Jan 24 07:07:54 crc kubenswrapper[4675]: I0124 07:07:54.591343 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-c4k6t"
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.057447 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g"
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.175741 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") pod \"d5597e1b-5874-4483-bf56-679470f1a288\" (UID: \"d5597e1b-5874-4483-bf56-679470f1a288\") "
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.177890 4675 generic.go:334] "Generic (PLEG): container finished" podID="d5597e1b-5874-4483-bf56-679470f1a288" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187" exitCode=0
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.177948 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerDied","Data":"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"}
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.177998 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd22g" event={"ID":"d5597e1b-5874-4483-bf56-679470f1a288","Type":"ContainerDied","Data":"65792c9a12bd755b36506910452a8b4e058f9cc6c39651f649e047267f59c4e5"}
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.178015 4675 scope.go:117] "RemoveContainer" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.178101 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vd22g"
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.181574 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d4hsh" event={"ID":"954076ba-3e6f-4e5b-9b3f-4637840d5021","Type":"ContainerStarted","Data":"b47c5bd61aa6b00f6f74f8fd52af5ce36b4de3a3c6e98a3aa959ca097f810639"}
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.182975 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz" (OuterVolumeSpecName: "kube-api-access-cx6zz") pod "d5597e1b-5874-4483-bf56-679470f1a288" (UID: "d5597e1b-5874-4483-bf56-679470f1a288"). InnerVolumeSpecName "kube-api-access-cx6zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.201716 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d4hsh" podStartSLOduration=1.5279759560000001 podStartE2EDuration="2.20169601s" podCreationTimestamp="2026-01-24 07:07:53 +0000 UTC" firstStartedPulling="2026-01-24 07:07:54.112984096 +0000 UTC m=+875.409089309" lastFinishedPulling="2026-01-24 07:07:54.78670413 +0000 UTC m=+876.082809363" observedRunningTime="2026-01-24 07:07:55.20000831 +0000 UTC m=+876.496113533" watchObservedRunningTime="2026-01-24 07:07:55.20169601 +0000 UTC m=+876.497801243"
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.209466 4675 scope.go:117] "RemoveContainer" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"
Jan 24 07:07:55 crc kubenswrapper[4675]: E0124 07:07:55.209967 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187\": container with ID starting with b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187 not found: ID does not exist" containerID="b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.209995 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187"} err="failed to get container status \"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187\": rpc error: code = NotFound desc = could not find container \"b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187\": container with ID starting with b04bba469b8d03edf3be551da13d5294dcdf1dad1748db76b0b3a25b900b3187 not found: ID does not exist"
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.277358 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx6zz\" (UniqueName: \"kubernetes.io/projected/d5597e1b-5874-4483-bf56-679470f1a288-kube-api-access-cx6zz\") on node \"crc\" DevicePath \"\""
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.514429 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"]
Jan 24 07:07:55 crc kubenswrapper[4675]: I0124 07:07:55.522265 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vd22g"]
Jan 24 07:07:56 crc kubenswrapper[4675]: I0124 07:07:56.956623 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5597e1b-5874-4483-bf56-679470f1a288" path="/var/lib/kubelet/pods/d5597e1b-5874-4483-bf56-679470f1a288/volumes"
Jan 24 07:08:03 crc kubenswrapper[4675]: I0124 07:08:03.737697 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:08:03 crc kubenswrapper[4675]: I0124 07:08:03.738331 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:08:03 crc kubenswrapper[4675]: I0124 07:08:03.779907 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:08:04 crc kubenswrapper[4675]: I0124 07:08:04.284433 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-d4hsh"
Jan 24 07:08:04 crc kubenswrapper[4675]: I0124 07:08:04.960127 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-78f4w"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.670085 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"]
Jan 24 07:08:05 crc kubenswrapper[4675]: E0124 07:08:05.670373 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.670393 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.670563 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5597e1b-5874-4483-bf56-679470f1a288" containerName="registry-server"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.671859 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.674229 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-knqct"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.684494 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"]
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.821958 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.822028 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.822063 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923216 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923241 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923807 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.923834 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.943392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") " pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:05 crc kubenswrapper[4675]: I0124 07:08:05.990323 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:06 crc kubenswrapper[4675]: I0124 07:08:06.255422 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"]
Jan 24 07:08:07 crc kubenswrapper[4675]: I0124 07:08:07.261871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerStarted","Data":"63b8bbc6f3983e757a4b92d9d6c2745edb3e94b79f32c6f55d0bfb6c5ed3ef4c"}
Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.269585 4675 generic.go:334] "Generic (PLEG): container finished" podID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerID="02729d3552e2026e205ce65c084a59e6a889858715a6349b9e2e4899a4af312d" exitCode=0
Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.269679 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"02729d3552e2026e205ce65c084a59e6a889858715a6349b9e2e4899a4af312d"}
Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.630493 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:08:08 crc kubenswrapper[4675]: I0124 07:08:08.630550 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:08:09 crc kubenswrapper[4675]: I0124 07:08:09.280089 4675 generic.go:334] "Generic (PLEG): container finished" podID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerID="dd726de03c77019531350763caa8896216b2bcc66927ae88a63165f351c168f1" exitCode=0
Jan 24 07:08:09 crc kubenswrapper[4675]: I0124 07:08:09.280128 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"dd726de03c77019531350763caa8896216b2bcc66927ae88a63165f351c168f1"}
Jan 24 07:08:10 crc kubenswrapper[4675]: I0124 07:08:10.291219 4675 generic.go:334] "Generic (PLEG): container finished" podID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerID="15254b01f8855ef68b4f6025f1248a4d1429df62e079d5e80ebb3f6b15482e0d" exitCode=0
Jan 24 07:08:10 crc kubenswrapper[4675]: I0124 07:08:10.291352 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"15254b01f8855ef68b4f6025f1248a4d1429df62e079d5e80ebb3f6b15482e0d"}
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.587229 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.704615 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") pod \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") "
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.704668 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") pod \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") "
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.704708 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") pod \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\" (UID: \"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d\") "
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.705420 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle" (OuterVolumeSpecName: "bundle") pod "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" (UID: "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.713026 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz" (OuterVolumeSpecName: "kube-api-access-ptprz") pod "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" (UID: "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d"). InnerVolumeSpecName "kube-api-access-ptprz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.720990 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util" (OuterVolumeSpecName: "util") pod "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" (UID: "ad9d9d8b-0730-4dc0-bd02-77a7db0b842d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.806187 4675 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.806217 4675 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-util\") on node \"crc\" DevicePath \"\""
Jan 24 07:08:11 crc kubenswrapper[4675]: I0124 07:08:11.806229 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptprz\" (UniqueName: \"kubernetes.io/projected/ad9d9d8b-0730-4dc0-bd02-77a7db0b842d-kube-api-access-ptprz\") on node \"crc\" DevicePath \"\""
Jan 24 07:08:12 crc kubenswrapper[4675]: I0124 07:08:12.304532 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk" event={"ID":"ad9d9d8b-0730-4dc0-bd02-77a7db0b842d","Type":"ContainerDied","Data":"63b8bbc6f3983e757a4b92d9d6c2745edb3e94b79f32c6f55d0bfb6c5ed3ef4c"}
Jan 24 07:08:12 crc kubenswrapper[4675]: I0124 07:08:12.304563 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b8bbc6f3983e757a4b92d9d6c2745edb3e94b79f32c6f55d0bfb6c5ed3ef4c"
Jan 24 07:08:12 crc kubenswrapper[4675]: I0124 07:08:12.304601 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.766742 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"]
Jan 24 07:08:17 crc kubenswrapper[4675]: E0124 07:08:17.767494 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="util"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767507 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="util"
Jan 24 07:08:17 crc kubenswrapper[4675]: E0124 07:08:17.767519 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="pull"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767525 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="pull"
Jan 24 07:08:17 crc kubenswrapper[4675]: E0124 07:08:17.767533 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="extract"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767538 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="extract"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.767645 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9d9d8b-0730-4dc0-bd02-77a7db0b842d" containerName="extract"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.768025 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.771098 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-z2v5x"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.810036 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"]
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.884675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7cqv\" (UniqueName: \"kubernetes.io/projected/fc267189-e8ca-412c-bb9a-6b251571a514-kube-api-access-l7cqv\") pod \"openstack-operator-controller-init-d498c57f9-4vbdv\" (UID: \"fc267189-e8ca-412c-bb9a-6b251571a514\") " pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"
Jan 24 07:08:17 crc kubenswrapper[4675]: I0124 07:08:17.985615 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7cqv\" (UniqueName: \"kubernetes.io/projected/fc267189-e8ca-412c-bb9a-6b251571a514-kube-api-access-l7cqv\") pod \"openstack-operator-controller-init-d498c57f9-4vbdv\" (UID: \"fc267189-e8ca-412c-bb9a-6b251571a514\") " pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"
Jan 24 07:08:18 crc kubenswrapper[4675]: I0124 07:08:18.008968 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7cqv\" (UniqueName: \"kubernetes.io/projected/fc267189-e8ca-412c-bb9a-6b251571a514-kube-api-access-l7cqv\") pod \"openstack-operator-controller-init-d498c57f9-4vbdv\" (UID: \"fc267189-e8ca-412c-bb9a-6b251571a514\") " pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"
Jan 24 07:08:18 crc kubenswrapper[4675]: I0124 07:08:18.088601 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"
Jan 24 07:08:18 crc kubenswrapper[4675]: I0124 07:08:18.550591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"]
Jan 24 07:08:19 crc kubenswrapper[4675]: I0124 07:08:19.341242 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" event={"ID":"fc267189-e8ca-412c-bb9a-6b251571a514","Type":"ContainerStarted","Data":"bc0512432a942e266fb15578d49b90e9d2e33b65a7bcf3d6c4a2d202d504d91f"}
Jan 24 07:08:24 crc kubenswrapper[4675]: I0124 07:08:24.382341 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" event={"ID":"fc267189-e8ca-412c-bb9a-6b251571a514","Type":"ContainerStarted","Data":"2b32da462870f3e911e342621f0621b2f306a6bd768e898fba088e501697c129"}
Jan 24 07:08:24 crc kubenswrapper[4675]: I0124 07:08:24.383006 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"
Jan 24 07:08:24 crc kubenswrapper[4675]: I0124 07:08:24.419264 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv" podStartSLOduration=2.630818585 podStartE2EDuration="7.419248596s" podCreationTimestamp="2026-01-24 07:08:17 +0000 UTC" firstStartedPulling="2026-01-24 07:08:18.559226664 +0000 UTC m=+899.855331877" lastFinishedPulling="2026-01-24 07:08:23.347656665 +0000 UTC m=+904.643761888" observedRunningTime="2026-01-24 07:08:24.416185912 +0000 UTC m=+905.712291155" watchObservedRunningTime="2026-01-24 07:08:24.419248596 +0000 UTC m=+905.715353819"
Jan 24 07:08:28 crc kubenswrapper[4675]: I0124 07:08:28.092487 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-d498c57f9-4vbdv"
Jan 24 07:08:38 crc kubenswrapper[4675]: I0124 07:08:38.630487 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:08:38 crc kubenswrapper[4675]: I0124 07:08:38.631092 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.319315 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6"]
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.320917 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6"
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.324041 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-p5nc5"
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.332625 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6"]
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.383871 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg"]
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.384733 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg"
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.389944 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b7mrc"
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.396699 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx"]
Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.397604 4675 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.400396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfws\" (UniqueName: \"kubernetes.io/projected/2db25911-f36e-43ae-8f47-b042ec82266e-kube-api-access-2zfws\") pod \"barbican-operator-controller-manager-7f86f8796f-dwbq6\" (UID: \"2db25911-f36e-43ae-8f47-b042ec82266e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.400438 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbk4x\" (UniqueName: \"kubernetes.io/projected/b8285f65-9930-4bb9-9e18-b6ffe19f45fb-kube-api-access-tbk4x\") pod \"cinder-operator-controller-manager-69cf5d4557-6jbwg\" (UID: \"b8285f65-9930-4bb9-9e18-b6ffe19f45fb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.413499 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jccng" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.421034 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.440470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.453320 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.454220 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.458363 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-g4zgx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfws\" (UniqueName: \"kubernetes.io/projected/2db25911-f36e-43ae-8f47-b042ec82266e-kube-api-access-2zfws\") pod \"barbican-operator-controller-manager-7f86f8796f-dwbq6\" (UID: \"2db25911-f36e-43ae-8f47-b042ec82266e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503234 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbk4x\" (UniqueName: \"kubernetes.io/projected/b8285f65-9930-4bb9-9e18-b6ffe19f45fb-kube-api-access-tbk4x\") pod \"cinder-operator-controller-manager-69cf5d4557-6jbwg\" (UID: \"b8285f65-9930-4bb9-9e18-b6ffe19f45fb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503308 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpth\" (UniqueName: \"kubernetes.io/projected/e7263d16-14c3-4254-821a-cbf99b7cf3e4-kube-api-access-sdpth\") pod \"glance-operator-controller-manager-78fdd796fd-thqtz\" (UID: \"e7263d16-14c3-4254-821a-cbf99b7cf3e4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.503339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplvd\" (UniqueName: \"kubernetes.io/projected/6003a1f9-ad0e-49f6-8750-6ac2208560cc-kube-api-access-gplvd\") 
pod \"designate-operator-controller-manager-b45d7bf98-79fwx\" (UID: \"6003a1f9-ad0e-49f6-8750-6ac2208560cc\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.519484 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.536700 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfws\" (UniqueName: \"kubernetes.io/projected/2db25911-f36e-43ae-8f47-b042ec82266e-kube-api-access-2zfws\") pod \"barbican-operator-controller-manager-7f86f8796f-dwbq6\" (UID: \"2db25911-f36e-43ae-8f47-b042ec82266e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.550338 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbk4x\" (UniqueName: \"kubernetes.io/projected/b8285f65-9930-4bb9-9e18-b6ffe19f45fb-kube-api-access-tbk4x\") pod \"cinder-operator-controller-manager-69cf5d4557-6jbwg\" (UID: \"b8285f65-9930-4bb9-9e18-b6ffe19f45fb\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.579781 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.580581 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.593124 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dvdjg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.594682 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.595375 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.600569 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tk46k" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.600813 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.610751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhr9k\" (UniqueName: \"kubernetes.io/projected/7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320-kube-api-access-nhr9k\") pod \"heat-operator-controller-manager-594c8c9d5d-mqk98\" (UID: \"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.610813 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdpth\" (UniqueName: \"kubernetes.io/projected/e7263d16-14c3-4254-821a-cbf99b7cf3e4-kube-api-access-sdpth\") pod \"glance-operator-controller-manager-78fdd796fd-thqtz\" (UID: \"e7263d16-14c3-4254-821a-cbf99b7cf3e4\") " 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.610838 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplvd\" (UniqueName: \"kubernetes.io/projected/6003a1f9-ad0e-49f6-8750-6ac2208560cc-kube-api-access-gplvd\") pod \"designate-operator-controller-manager-b45d7bf98-79fwx\" (UID: \"6003a1f9-ad0e-49f6-8750-6ac2208560cc\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.626929 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.627732 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.629784 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.634029 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nhg7j" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.634204 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.640733 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.648782 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.671113 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.672019 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.673958 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdpth\" (UniqueName: \"kubernetes.io/projected/e7263d16-14c3-4254-821a-cbf99b7cf3e4-kube-api-access-sdpth\") pod \"glance-operator-controller-manager-78fdd796fd-thqtz\" (UID: \"e7263d16-14c3-4254-821a-cbf99b7cf3e4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.682545 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplvd\" (UniqueName: \"kubernetes.io/projected/6003a1f9-ad0e-49f6-8750-6ac2208560cc-kube-api-access-gplvd\") pod \"designate-operator-controller-manager-b45d7bf98-79fwx\" (UID: \"6003a1f9-ad0e-49f6-8750-6ac2208560cc\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.698031 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.702093 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-72gdv" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714379 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkslj\" (UniqueName: \"kubernetes.io/projected/4aa5aa88-c6f2-4000-9a9d-3b14e23220de-kube-api-access-kkslj\") pod \"horizon-operator-controller-manager-77d5c5b54f-67vkh\" (UID: \"4aa5aa88-c6f2-4000-9a9d-3b14e23220de\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714431 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkwvr\" (UniqueName: \"kubernetes.io/projected/743af71f-3542-439c-b3a1-33a7b9ae34f1-kube-api-access-pkwvr\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714486 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bcb\" (UniqueName: \"kubernetes.io/projected/06f423e8-7ba9-497d-a587-cc880d66625b-kube-api-access-j8bcb\") pod \"ironic-operator-controller-manager-598f7747c9-l7jq5\" (UID: \"06f423e8-7ba9-497d-a587-cc880d66625b\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714514 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod 
\"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.714542 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhr9k\" (UniqueName: \"kubernetes.io/projected/7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320-kube-api-access-nhr9k\") pod \"heat-operator-controller-manager-594c8c9d5d-mqk98\" (UID: \"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.716525 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.721966 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.736427 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.737250 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.744054 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bznx5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.757449 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhr9k\" (UniqueName: \"kubernetes.io/projected/7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320-kube-api-access-nhr9k\") pod \"heat-operator-controller-manager-594c8c9d5d-mqk98\" (UID: \"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.780050 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.788644 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.789434 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.797073 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.797799 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.805297 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kgmqr" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.805565 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-g75sx" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.808954 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817346 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljqq\" (UniqueName: \"kubernetes.io/projected/7660e41e-527d-4806-8ef3-6dee25fa72c5-kube-api-access-8ljqq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-vjf84\" (UID: \"7660e41e-527d-4806-8ef3-6dee25fa72c5\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn97d\" (UniqueName: \"kubernetes.io/projected/5b3a45f7-a1eb-44a2-b0be-7c77b190d50c-kube-api-access-mn97d\") pod \"keystone-operator-controller-manager-b8b6d4659-bqd4q\" (UID: \"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817434 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bcb\" (UniqueName: \"kubernetes.io/projected/06f423e8-7ba9-497d-a587-cc880d66625b-kube-api-access-j8bcb\") pod 
\"ironic-operator-controller-manager-598f7747c9-l7jq5\" (UID: \"06f423e8-7ba9-497d-a587-cc880d66625b\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817462 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817486 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8stn\" (UniqueName: \"kubernetes.io/projected/e09ce8a8-a2a4-4fec-b36d-a97910aced0f-kube-api-access-z8stn\") pod \"manila-operator-controller-manager-78c6999f6f-6lq96\" (UID: \"e09ce8a8-a2a4-4fec-b36d-a97910aced0f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817530 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkslj\" (UniqueName: \"kubernetes.io/projected/4aa5aa88-c6f2-4000-9a9d-3b14e23220de-kube-api-access-kkslj\") pod \"horizon-operator-controller-manager-77d5c5b54f-67vkh\" (UID: \"4aa5aa88-c6f2-4000-9a9d-3b14e23220de\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.817558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkwvr\" (UniqueName: \"kubernetes.io/projected/743af71f-3542-439c-b3a1-33a7b9ae34f1-kube-api-access-pkwvr\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" 
Jan 24 07:08:47 crc kubenswrapper[4675]: E0124 07:08:47.817910 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:47 crc kubenswrapper[4675]: E0124 07:08:47.817950 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:48.31793455 +0000 UTC m=+929.614039773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.821169 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.855507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bcb\" (UniqueName: \"kubernetes.io/projected/06f423e8-7ba9-497d-a587-cc880d66625b-kube-api-access-j8bcb\") pod \"ironic-operator-controller-manager-598f7747c9-l7jq5\" (UID: \"06f423e8-7ba9-497d-a587-cc880d66625b\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.863852 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkslj\" (UniqueName: \"kubernetes.io/projected/4aa5aa88-c6f2-4000-9a9d-3b14e23220de-kube-api-access-kkslj\") pod \"horizon-operator-controller-manager-77d5c5b54f-67vkh\" (UID: \"4aa5aa88-c6f2-4000-9a9d-3b14e23220de\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc 
kubenswrapper[4675]: I0124 07:08:47.894403 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkwvr\" (UniqueName: \"kubernetes.io/projected/743af71f-3542-439c-b3a1-33a7b9ae34f1-kube-api-access-pkwvr\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.914798 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.919479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn97d\" (UniqueName: \"kubernetes.io/projected/5b3a45f7-a1eb-44a2-b0be-7c77b190d50c-kube-api-access-mn97d\") pod \"keystone-operator-controller-manager-b8b6d4659-bqd4q\" (UID: \"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.919563 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8stn\" (UniqueName: \"kubernetes.io/projected/e09ce8a8-a2a4-4fec-b36d-a97910aced0f-kube-api-access-z8stn\") pod \"manila-operator-controller-manager-78c6999f6f-6lq96\" (UID: \"e09ce8a8-a2a4-4fec-b36d-a97910aced0f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.919627 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ljqq\" (UniqueName: \"kubernetes.io/projected/7660e41e-527d-4806-8ef3-6dee25fa72c5-kube-api-access-8ljqq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-vjf84\" (UID: \"7660e41e-527d-4806-8ef3-6dee25fa72c5\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.942309 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.942834 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96"] Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.976682 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8stn\" (UniqueName: \"kubernetes.io/projected/e09ce8a8-a2a4-4fec-b36d-a97910aced0f-kube-api-access-z8stn\") pod \"manila-operator-controller-manager-78c6999f6f-6lq96\" (UID: \"e09ce8a8-a2a4-4fec-b36d-a97910aced0f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:08:47 crc kubenswrapper[4675]: I0124 07:08:47.977450 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.004419 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.006469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn97d\" (UniqueName: \"kubernetes.io/projected/5b3a45f7-a1eb-44a2-b0be-7c77b190d50c-kube-api-access-mn97d\") pod \"keystone-operator-controller-manager-b8b6d4659-bqd4q\" (UID: \"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.029055 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p66r4" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.029759 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.030487 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.044208 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"] Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.044385 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qz5\" (UniqueName: \"kubernetes.io/projected/724ac56d-9f4e-40f9-98f7-3a65c807f89c-kube-api-access-99qz5\") pod \"neutron-operator-controller-manager-78d58447c5-dzvlp\" (UID: \"724ac56d-9f4e-40f9-98f7-3a65c807f89c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.045575 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.048404 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ljqq\" (UniqueName: \"kubernetes.io/projected/7660e41e-527d-4806-8ef3-6dee25fa72c5-kube-api-access-8ljqq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-vjf84\" (UID: \"7660e41e-527d-4806-8ef3-6dee25fa72c5\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.051245 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-skxlb"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.065772 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.075029 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.075853 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.088235 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.103454 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mjrhf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.110813 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.111659 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.112298 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.122013 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.122953 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.133993 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.148242 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jx2pj"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.148653 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.148814 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mlccn"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.150362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qz5\" (UniqueName: \"kubernetes.io/projected/724ac56d-9f4e-40f9-98f7-3a65c807f89c-kube-api-access-99qz5\") pod \"neutron-operator-controller-manager-78d58447c5-dzvlp\" (UID: \"724ac56d-9f4e-40f9-98f7-3a65c807f89c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.150420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pf24\" (UniqueName: \"kubernetes.io/projected/bdc167a3-9335-4b3d-9696-a1d03b9ae618-kube-api-access-5pf24\") pod \"octavia-operator-controller-manager-7bd9774b6-q6qn9\" (UID: \"bdc167a3-9335-4b3d-9696-a1d03b9ae618\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.150511 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7k4s\" (UniqueName: \"kubernetes.io/projected/6f867475-7eee-431c-97ee-12ae861193c7-kube-api-access-l7k4s\") pod \"nova-operator-controller-manager-6b8bc8d87d-4lmvf\" (UID: \"6f867475-7eee-431c-97ee-12ae861193c7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.158776 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.165786 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.179083 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.179861 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.190239 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4zjqq"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.212253 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.229840 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.244410 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.245514 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.250786 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lp9nd"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265407 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pf24\" (UniqueName: \"kubernetes.io/projected/bdc167a3-9335-4b3d-9696-a1d03b9ae618-kube-api-access-5pf24\") pod \"octavia-operator-controller-manager-7bd9774b6-q6qn9\" (UID: \"bdc167a3-9335-4b3d-9696-a1d03b9ae618\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265447 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265531 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmzsm\" (UniqueName: \"kubernetes.io/projected/a1041f21-5d7d-4b17-84ff-ee83332e604d-kube-api-access-rmzsm\") pod \"ovn-operator-controller-manager-55db956ddc-n4kll\" (UID: \"a1041f21-5d7d-4b17-84ff-ee83332e604d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265559 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbsl\" (UniqueName: \"kubernetes.io/projected/20b0ee18-4569-4428-956f-d8795904f368-kube-api-access-bjbsl\") pod \"placement-operator-controller-manager-5d646b7d76-l5hrz\" (UID: \"20b0ee18-4569-4428-956f-d8795904f368\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265578 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7k4s\" (UniqueName: \"kubernetes.io/projected/6f867475-7eee-431c-97ee-12ae861193c7-kube-api-access-l7k4s\") pod \"nova-operator-controller-manager-6b8bc8d87d-4lmvf\" (UID: \"6f867475-7eee-431c-97ee-12ae861193c7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.265603 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtsj2\" (UniqueName: \"kubernetes.io/projected/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-kube-api-access-jtsj2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.266604 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qz5\" (UniqueName: \"kubernetes.io/projected/724ac56d-9f4e-40f9-98f7-3a65c807f89c-kube-api-access-99qz5\") pod \"neutron-operator-controller-manager-78d58447c5-dzvlp\" (UID: \"724ac56d-9f4e-40f9-98f7-3a65c807f89c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.305665 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.306194 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pf24\" (UniqueName: \"kubernetes.io/projected/bdc167a3-9335-4b3d-9696-a1d03b9ae618-kube-api-access-5pf24\") pod \"octavia-operator-controller-manager-7bd9774b6-q6qn9\" (UID: \"bdc167a3-9335-4b3d-9696-a1d03b9ae618\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.324972 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.325775 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.336467 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7k4s\" (UniqueName: \"kubernetes.io/projected/6f867475-7eee-431c-97ee-12ae861193c7-kube-api-access-l7k4s\") pod \"nova-operator-controller-manager-6b8bc8d87d-4lmvf\" (UID: \"6f867475-7eee-431c-97ee-12ae861193c7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.339458 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l9j7r"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.345977 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.346974 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369657 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369696 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369750 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnq2\" (UniqueName: \"kubernetes.io/projected/4bfb9011-058d-494d-96ce-a39202c7b851-kube-api-access-8lnq2\") pod \"swift-operator-controller-manager-7d55b89685-9rvmf\" (UID: \"4bfb9011-058d-494d-96ce-a39202c7b851\") " pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369776 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng2kw\" (UniqueName: \"kubernetes.io/projected/47e89f8e-f652-43a1-a36a-2db184700f3e-kube-api-access-ng2kw\") pod \"telemetry-operator-controller-manager-85cd9769bb-n6jmw\" (UID: \"47e89f8e-f652-43a1-a36a-2db184700f3e\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369794 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmzsm\" (UniqueName: \"kubernetes.io/projected/a1041f21-5d7d-4b17-84ff-ee83332e604d-kube-api-access-rmzsm\") pod \"ovn-operator-controller-manager-55db956ddc-n4kll\" (UID: \"a1041f21-5d7d-4b17-84ff-ee83332e604d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369819 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbsl\" (UniqueName: \"kubernetes.io/projected/20b0ee18-4569-4428-956f-d8795904f368-kube-api-access-bjbsl\") pod \"placement-operator-controller-manager-5d646b7d76-l5hrz\" (UID: \"20b0ee18-4569-4428-956f-d8795904f368\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.369844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtsj2\" (UniqueName: \"kubernetes.io/projected/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-kube-api-access-jtsj2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371066 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371155 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:48.87113958 +0000 UTC m=+930.167244803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371362 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.371397 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.371387225 +0000 UTC m=+930.667492458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.381875 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.385935 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rc85t"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.390271 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.412601 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjbsl\" (UniqueName: \"kubernetes.io/projected/20b0ee18-4569-4428-956f-d8795904f368-kube-api-access-bjbsl\") pod \"placement-operator-controller-manager-5d646b7d76-l5hrz\" (UID: \"20b0ee18-4569-4428-956f-d8795904f368\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.413021 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.423250 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.424640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtsj2\" (UniqueName: \"kubernetes.io/projected/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-kube-api-access-jtsj2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.439955 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.441530 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.447966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmzsm\" (UniqueName: \"kubernetes.io/projected/a1041f21-5d7d-4b17-84ff-ee83332e604d-kube-api-access-rmzsm\") pod \"ovn-operator-controller-manager-55db956ddc-n4kll\" (UID: \"a1041f21-5d7d-4b17-84ff-ee83332e604d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.448245 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.448552 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.457735 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9ldcr"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.469941 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.470698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lnq2\" (UniqueName: \"kubernetes.io/projected/4bfb9011-058d-494d-96ce-a39202c7b851-kube-api-access-8lnq2\") pod \"swift-operator-controller-manager-7d55b89685-9rvmf\" (UID: \"4bfb9011-058d-494d-96ce-a39202c7b851\") " pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.470750 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6z8w\" (UniqueName: \"kubernetes.io/projected/fae349a1-6c08-4424-abe2-42dddccd55cc-kube-api-access-s6z8w\") pod \"test-operator-controller-manager-69797bbcbd-k7crk\" (UID: \"fae349a1-6c08-4424-abe2-42dddccd55cc\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.470840 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng2kw\" (UniqueName: \"kubernetes.io/projected/47e89f8e-f652-43a1-a36a-2db184700f3e-kube-api-access-ng2kw\") pod \"telemetry-operator-controller-manager-85cd9769bb-n6jmw\" (UID: \"47e89f8e-f652-43a1-a36a-2db184700f3e\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.512842 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng2kw\" (UniqueName: \"kubernetes.io/projected/47e89f8e-f652-43a1-a36a-2db184700f3e-kube-api-access-ng2kw\") pod \"telemetry-operator-controller-manager-85cd9769bb-n6jmw\" (UID: \"47e89f8e-f652-43a1-a36a-2db184700f3e\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.513553 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lnq2\" (UniqueName: \"kubernetes.io/projected/4bfb9011-058d-494d-96ce-a39202c7b851-kube-api-access-8lnq2\") pod \"swift-operator-controller-manager-7d55b89685-9rvmf\" (UID: \"4bfb9011-058d-494d-96ce-a39202c7b851\") " pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.557005 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.572165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6z8w\" (UniqueName: \"kubernetes.io/projected/fae349a1-6c08-4424-abe2-42dddccd55cc-kube-api-access-s6z8w\") pod \"test-operator-controller-manager-69797bbcbd-k7crk\" (UID: \"fae349a1-6c08-4424-abe2-42dddccd55cc\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.572237 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vq2z\" (UniqueName: \"kubernetes.io/projected/f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480-kube-api-access-8vq2z\") pod \"watcher-operator-controller-manager-6d9458688d-9fkjr\" (UID: \"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.584086 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.646355 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.648043 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.712734 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vq2z\" (UniqueName: \"kubernetes.io/projected/f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480-kube-api-access-8vq2z\") pod \"watcher-operator-controller-manager-6d9458688d-9fkjr\" (UID: \"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.713272 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.725266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6z8w\" (UniqueName: \"kubernetes.io/projected/fae349a1-6c08-4424-abe2-42dddccd55cc-kube-api-access-s6z8w\") pod \"test-operator-controller-manager-69797bbcbd-k7crk\" (UID: \"fae349a1-6c08-4424-abe2-42dddccd55cc\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.731742 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.732707 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nv27l"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.732869 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.817819 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.826902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdz6d\" (UniqueName: \"kubernetes.io/projected/d94b056e-c445-4033-8d02-a794dae4b671-kube-api-access-zdz6d\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.826995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.827030 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.841437 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.842276 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.848760 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vq2z\" (UniqueName: \"kubernetes.io/projected/f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480-kube-api-access-8vq2z\") pod \"watcher-operator-controller-manager-6d9458688d-9fkjr\" (UID: \"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.851061 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tjzcp"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.853700 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"]
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.928619 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.928952 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdz6d\" (UniqueName: \"kubernetes.io/projected/d94b056e-c445-4033-8d02-a794dae4b671-kube-api-access-zdz6d\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.929069 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.929185 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929442 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929557 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.429542195 +0000 UTC m=+930.725647418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929677 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929761 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.92974186 +0000 UTC m=+931.225847143 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929811 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: E0124 07:08:48.929845 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:49.429834022 +0000 UTC m=+930.725939345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found
Jan 24 07:08:48 crc kubenswrapper[4675]: I0124 07:08:48.970566 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdz6d\" (UniqueName: \"kubernetes.io/projected/d94b056e-c445-4033-8d02-a794dae4b671-kube-api-access-zdz6d\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.008945 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.030366 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrf8\" (UniqueName: \"kubernetes.io/projected/b7d1f492-700c-492e-a1c2-eae496f0133c-kube-api-access-tkrf8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9cmpf\" (UID: \"b7d1f492-700c-492e-a1c2-eae496f0133c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.105011 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.133005 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrf8\" (UniqueName: \"kubernetes.io/projected/b7d1f492-700c-492e-a1c2-eae496f0133c-kube-api-access-tkrf8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9cmpf\" (UID: \"b7d1f492-700c-492e-a1c2-eae496f0133c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.168817 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz"]
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.177042 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrf8\" (UniqueName: \"kubernetes.io/projected/b7d1f492-700c-492e-a1c2-eae496f0133c-kube-api-access-tkrf8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9cmpf\" (UID: \"b7d1f492-700c-492e-a1c2-eae496f0133c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.211992 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.441516 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.441575 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"
Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.441653 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"
Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441791 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441838 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:51.441821799 +0000 UTC m=+932.737927012 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441881 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441904 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:50.441896671 +0000 UTC m=+931.738001894 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441934 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.441951 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:50.441945922 +0000 UTC m=+931.738051145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.585593 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" event={"ID":"e7263d16-14c3-4254-821a-cbf99b7cf3e4","Type":"ContainerStarted","Data":"4053dc81ded619efc039589e0c94d1e4e592b2943cf7a7328e5e39cf06982048"} Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.668164 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.685662 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.723813 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.740468 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.764591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98"] Jan 24 07:08:49 crc kubenswrapper[4675]: I0124 07:08:49.954440 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.955204 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:49 crc kubenswrapper[4675]: E0124 07:08:49.955264 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:51.955248121 +0000 UTC m=+933.251353344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.008715 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.022174 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.035477 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.042103 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.049860 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q"] Jan 
24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.064189 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.074832 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll"] Jan 24 07:08:50 crc kubenswrapper[4675]: W0124 07:08:50.100307 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1041f21_5d7d_4b17_84ff_ee83332e604d.slice/crio-5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d WatchSource:0}: Error finding container 5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d: Status 404 returned error can't find the container with id 5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.333419 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99qz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-dzvlp_openstack-operators(724ac56d-9f4e-40f9-98f7-3a65c807f89c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.335019 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podUID="724ac56d-9f4e-40f9-98f7-3a65c807f89c" Jan 24 07:08:50 crc 
kubenswrapper[4675]: E0124 07:08:50.335358 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ng2kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-n6jmw_openstack-operators(47e89f8e-f652-43a1-a36a-2db184700f3e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.336482 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" podUID="47e89f8e-f652-43a1-a36a-2db184700f3e" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.337691 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9"] Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.347494 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjbsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-l5hrz_openstack-operators(20b0ee18-4569-4428-956f-d8795904f368): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.348869 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podUID="20b0ee18-4569-4428-956f-d8795904f368" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.351624 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkrf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9cmpf_openstack-operators(b7d1f492-700c-492e-a1c2-eae496f0133c): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.353259 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.358485 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vq2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-9fkjr_openstack-operators(f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.359776 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podUID="f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.361791 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf"] Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.364737 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6z8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-k7crk_openstack-operators(fae349a1-6c08-4424-abe2-42dddccd55cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.366296 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" podUID="fae349a1-6c08-4424-abe2-42dddccd55cc" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.375777 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.382330 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.388970 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.400134 4675 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.410212 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr"] Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.465023 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.465076 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465155 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465212 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465224 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:52.465204899 +0000 UTC m=+933.761310122 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.465262 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:52.46524648 +0000 UTC m=+933.761351703 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.621578 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" event={"ID":"bdc167a3-9335-4b3d-9696-a1d03b9ae618","Type":"ContainerStarted","Data":"fce3ed873352b3493004b7f12851e86940064ee59fb36cfe44b6d0f590037607"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.625755 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" event={"ID":"6f867475-7eee-431c-97ee-12ae861193c7","Type":"ContainerStarted","Data":"b5547f9e5f6a00940af360ed218e79f5b9ac2007b13ea38c20233553047b9abe"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.634148 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" 
event={"ID":"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c","Type":"ContainerStarted","Data":"c99108618bd169b32757bf909a7174483c424c7f08fb210aa89d23c3d6e3cba6"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.640671 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" event={"ID":"6003a1f9-ad0e-49f6-8750-6ac2208560cc","Type":"ContainerStarted","Data":"f1e05f41e78633866be273d44341a8e2f9eb9cb7a3dec40bc58871323e2dba19"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.661913 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" event={"ID":"47e89f8e-f652-43a1-a36a-2db184700f3e","Type":"ContainerStarted","Data":"4c99997aae050a8edbbe41f3b19aa65e8448ec5f6d041c7f49781fd5bad7b72f"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.664042 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" event={"ID":"b7d1f492-700c-492e-a1c2-eae496f0133c","Type":"ContainerStarted","Data":"bfd8c3fa740f2aef55a90aadb35c7febf7b161cb0a2bd830ac2fd795b119eade"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.667098 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" event={"ID":"724ac56d-9f4e-40f9-98f7-3a65c807f89c","Type":"ContainerStarted","Data":"25414a42c86c2ab6df8f50725eec9c0eca9a569caa144a89ee212298468b3628"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.668158 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" 
podUID="47e89f8e-f652-43a1-a36a-2db184700f3e" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.670589 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" event={"ID":"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320","Type":"ContainerStarted","Data":"1b59696e894c8157e6c64d947458f64b4320dcc604f4a05fd3b4036aa5813d5c"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.671064 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podUID="724ac56d-9f4e-40f9-98f7-3a65c807f89c" Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.671225 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.672357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" event={"ID":"2db25911-f36e-43ae-8f47-b042ec82266e","Type":"ContainerStarted","Data":"d4bed27cff2eabfb8c25285528887e3a70feb9940f93a0bdfa2460613ed2771f"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.688456 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" 
event={"ID":"b8285f65-9930-4bb9-9e18-b6ffe19f45fb","Type":"ContainerStarted","Data":"c7c2f86a133de577b33d63eea556b7c9b1470aef1c2015f7c53bc997ec1dbfeb"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.699277 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" event={"ID":"a1041f21-5d7d-4b17-84ff-ee83332e604d","Type":"ContainerStarted","Data":"5fcd60245292c84bb47812631264b2d215fdc23f0fea80c4102663b139582e1d"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.709077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" event={"ID":"4bfb9011-058d-494d-96ce-a39202c7b851","Type":"ContainerStarted","Data":"2f4654a2a38ec5baf7e572a02b42acadf88bd8b7324099b6a00bf4345aca34cd"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.732104 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" event={"ID":"e09ce8a8-a2a4-4fec-b36d-a97910aced0f","Type":"ContainerStarted","Data":"0ad7f68cf58bc6d341e8795389c57b293effebebd8a3888f5220c7b91f0aacc6"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.734485 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" event={"ID":"fae349a1-6c08-4424-abe2-42dddccd55cc","Type":"ContainerStarted","Data":"a34a9f6f3a30b7b468bd8304b834fa90c36ad04497f3848d649af1d9008f5b7a"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.739714 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" 
podUID="fae349a1-6c08-4424-abe2-42dddccd55cc" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.740059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" event={"ID":"20b0ee18-4569-4428-956f-d8795904f368","Type":"ContainerStarted","Data":"d40fcf26a929d07e25381dc52b422c987ee8d2debd4c9e844c671b533fcdbbde"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.742878 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podUID="20b0ee18-4569-4428-956f-d8795904f368" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.755095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" event={"ID":"4aa5aa88-c6f2-4000-9a9d-3b14e23220de","Type":"ContainerStarted","Data":"d417a22a7974fc952028863ef25535bbc30f67500cadb12266638641e5211b05"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.759089 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" event={"ID":"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480","Type":"ContainerStarted","Data":"0a0985f908884d42b0c2caf26278516449cb3caaef568692c6a155d434d43662"} Jan 24 07:08:50 crc kubenswrapper[4675]: E0124 07:08:50.760662 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podUID="f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480" Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.761902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" event={"ID":"7660e41e-527d-4806-8ef3-6dee25fa72c5","Type":"ContainerStarted","Data":"a39cf7750ebe2640001bfc7a264c967c35007bed7fa7dcd37ba31301a285993b"} Jan 24 07:08:50 crc kubenswrapper[4675]: I0124 07:08:50.795619 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" event={"ID":"06f423e8-7ba9-497d-a587-cc880d66625b","Type":"ContainerStarted","Data":"1e41838d4284600ddbd83652937577974fd8e8d7062fe0166461d959137f4297"} Jan 24 07:08:51 crc kubenswrapper[4675]: I0124 07:08:51.480892 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.481040 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.481106 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:55.481087827 +0000 UTC m=+936.777193050 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807732 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podUID="724ac56d-9f4e-40f9-98f7-3a65c807f89c" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807745 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" podUID="fae349a1-6c08-4424-abe2-42dddccd55cc" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807748 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podUID="20b0ee18-4569-4428-956f-d8795904f368" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.807878 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podUID="f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.808040 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" podUID="47e89f8e-f652-43a1-a36a-2db184700f3e" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.812617 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:08:51 crc kubenswrapper[4675]: I0124 07:08:51.986914 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.987504 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:51 crc kubenswrapper[4675]: E0124 07:08:51.987756 4675 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:55.987579201 +0000 UTC m=+937.283684424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.493424 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.493497 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:56.493481941 +0000 UTC m=+937.789587164 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: I0124 07:08:52.493306 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:52 crc kubenswrapper[4675]: I0124 07:08:52.493898 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.493982 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:52 crc kubenswrapper[4675]: E0124 07:08:52.494013 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:08:56.494005524 +0000 UTC m=+937.790110747 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:08:55 crc kubenswrapper[4675]: I0124 07:08:55.543871 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:08:55 crc kubenswrapper[4675]: E0124 07:08:55.543995 4675 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:55 crc kubenswrapper[4675]: E0124 07:08:55.544705 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert podName:743af71f-3542-439c-b3a1-33a7b9ae34f1 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:03.544680454 +0000 UTC m=+944.840785677 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert") pod "infra-operator-controller-manager-694cf4f878-c5658" (UID: "743af71f-3542-439c-b3a1-33a7b9ae34f1") : secret "infra-operator-webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: I0124 07:08:56.055868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.056038 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.056138 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:04.056116117 +0000 UTC m=+945.352221350 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: I0124 07:08:56.561908 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:56 crc kubenswrapper[4675]: I0124 07:08:56.561960 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562112 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562213 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:04.562191161 +0000 UTC m=+945.858296394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562124 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:08:56 crc kubenswrapper[4675]: E0124 07:08:56.562307 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:04.562287053 +0000 UTC m=+945.858392296 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.393822 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.394470 4675 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.394628 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8lnq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7d55b89685-9rvmf_openstack-operators(4bfb9011-058d-494d-96ce-a39202c7b851): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.395923 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" podUID="4bfb9011-058d-494d-96ce-a39202c7b851" Jan 24 07:09:01 crc kubenswrapper[4675]: E0124 07:09:01.978202 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.46:5001/openstack-k8s-operators/swift-operator:3f3d4b5b93dec19b0d73b14b970587e1a5690ecb\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" podUID="4bfb9011-058d-494d-96ce-a39202c7b851" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.074707 4675 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.074985 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmzsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-n4kll_openstack-operators(a1041f21-5d7d-4b17-84ff-ee83332e604d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.076665 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" podUID="a1041f21-5d7d-4b17-84ff-ee83332e604d" Jan 24 07:09:03 crc kubenswrapper[4675]: I0124 07:09:03.576595 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:03 crc kubenswrapper[4675]: I0124 07:09:03.583589 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/743af71f-3542-439c-b3a1-33a7b9ae34f1-cert\") pod \"infra-operator-controller-manager-694cf4f878-c5658\" (UID: \"743af71f-3542-439c-b3a1-33a7b9ae34f1\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:03 crc kubenswrapper[4675]: I0124 07:09:03.850280 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:03 crc kubenswrapper[4675]: E0124 07:09:03.994346 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" podUID="a1041f21-5d7d-4b17-84ff-ee83332e604d" Jan 24 07:09:04 crc kubenswrapper[4675]: I0124 07:09:04.083823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.083976 4675 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.084066 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert podName:ac97fbc7-211e-41e3-8e16-aff853a7c9f4 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:20.084043858 +0000 UTC m=+961.380149081 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" (UID: "ac97fbc7-211e-41e3-8e16-aff853a7c9f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.401272 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.401523 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-mqk98_openstack-operators(7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.403534 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" podUID="7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320" Jan 24 07:09:04 crc kubenswrapper[4675]: I0124 07:09:04.590138 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod 
\"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:04 crc kubenswrapper[4675]: I0124 07:09:04.590219 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590298 4675 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590353 4675 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590370 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. No retries permitted until 2026-01-24 07:09:20.590350327 +0000 UTC m=+961.886455550 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "webhook-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.590389 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs podName:d94b056e-c445-4033-8d02-a794dae4b671 nodeName:}" failed. 
No retries permitted until 2026-01-24 07:09:20.590379117 +0000 UTC m=+961.886484340 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs") pod "openstack-operator-controller-manager-688fccdd58-dkxf7" (UID: "d94b056e-c445-4033-8d02-a794dae4b671") : secret "metrics-server-cert" not found Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.965905 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.966213 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pf24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-q6qn9_openstack-operators(bdc167a3-9335-4b3d-9696-a1d03b9ae618): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:04 crc kubenswrapper[4675]: E0124 07:09:04.967414 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" podUID="bdc167a3-9335-4b3d-9696-a1d03b9ae618" Jan 24 07:09:05 crc kubenswrapper[4675]: E0124 07:09:05.002025 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" podUID="7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320" Jan 24 07:09:05 crc kubenswrapper[4675]: E0124 07:09:05.008075 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" podUID="bdc167a3-9335-4b3d-9696-a1d03b9ae618" Jan 24 07:09:06 crc kubenswrapper[4675]: E0124 07:09:06.428563 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 24 07:09:06 crc kubenswrapper[4675]: E0124 07:09:06.429421 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8stn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-6lq96_openstack-operators(e09ce8a8-a2a4-4fec-b36d-a97910aced0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:06 crc kubenswrapper[4675]: E0124 07:09:06.430705 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" podUID="e09ce8a8-a2a4-4fec-b36d-a97910aced0f" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.005471 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.005635 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zfws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-dwbq6_openstack-operators(2db25911-f36e-43ae-8f47-b042ec82266e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.008098 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" podUID="2db25911-f36e-43ae-8f47-b042ec82266e" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.018817 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" podUID="2db25911-f36e-43ae-8f47-b042ec82266e" Jan 24 07:09:07 crc kubenswrapper[4675]: E0124 07:09:07.020602 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" podUID="e09ce8a8-a2a4-4fec-b36d-a97910aced0f" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.630153 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.630242 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.630305 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.631273 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d"} 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:09:08 crc kubenswrapper[4675]: I0124 07:09:08.631352 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d" gracePeriod=600 Jan 24 07:09:10 crc kubenswrapper[4675]: I0124 07:09:10.040981 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d" exitCode=0 Jan 24 07:09:10 crc kubenswrapper[4675]: I0124 07:09:10.041058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d"} Jan 24 07:09:10 crc kubenswrapper[4675]: I0124 07:09:10.041174 4675 scope.go:117] "RemoveContainer" containerID="ac5cd34383b94a74f69690862b304069f07aa99a5c5c4c95b3f3f978f0196984" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.071502 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.072140 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gplvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-79fwx_openstack-operators(6003a1f9-ad0e-49f6-8750-6ac2208560cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.073876 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" podUID="6003a1f9-ad0e-49f6-8750-6ac2208560cc" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.578973 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.579175 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mn97d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-bqd4q_openstack-operators(5b3a45f7-a1eb-44a2-b0be-7c77b190d50c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:14 crc kubenswrapper[4675]: E0124 07:09:14.581220 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" podUID="5b3a45f7-a1eb-44a2-b0be-7c77b190d50c" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.074509 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" podUID="5b3a45f7-a1eb-44a2-b0be-7c77b190d50c" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.074592 4675 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" podUID="6003a1f9-ad0e-49f6-8750-6ac2208560cc" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.099947 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.100174 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l7k4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-4lmvf_openstack-operators(6f867475-7eee-431c-97ee-12ae861193c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:15 crc kubenswrapper[4675]: E0124 07:09:15.101235 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" podUID="6f867475-7eee-431c-97ee-12ae861193c7" Jan 24 07:09:16 crc kubenswrapper[4675]: E0124 07:09:16.082907 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" podUID="6f867475-7eee-431c-97ee-12ae861193c7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.182353 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.188977 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac97fbc7-211e-41e3-8e16-aff853a7c9f4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk\" (UID: \"ac97fbc7-211e-41e3-8e16-aff853a7c9f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.292376 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jx2pj" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.300938 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.688588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.688630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.692302 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-webhook-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.695568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d94b056e-c445-4033-8d02-a794dae4b671-metrics-certs\") pod \"openstack-operator-controller-manager-688fccdd58-dkxf7\" (UID: \"d94b056e-c445-4033-8d02-a794dae4b671\") " pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.893406 4675 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nv27l" Jan 24 07:09:20 crc kubenswrapper[4675]: I0124 07:09:20.901387 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:24 crc kubenswrapper[4675]: I0124 07:09:24.207118 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-c5658"] Jan 24 07:09:24 crc kubenswrapper[4675]: E0124 07:09:24.309153 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 24 07:09:24 crc kubenswrapper[4675]: E0124 07:09:24.309329 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkrf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9cmpf_openstack-operators(b7d1f492-700c-492e-a1c2-eae496f0133c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:09:24 crc kubenswrapper[4675]: E0124 07:09:24.310448 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:09:24 crc kubenswrapper[4675]: I0124 07:09:24.623403 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7"] Jan 24 07:09:24 crc kubenswrapper[4675]: W0124 07:09:24.651017 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd94b056e_c445_4033_8d02_a794dae4b671.slice/crio-9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36 WatchSource:0}: Error finding container 9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36: Status 404 returned error can't find the container with id 9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36 Jan 24 07:09:24 crc kubenswrapper[4675]: I0124 07:09:24.869813 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk"] Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.165303 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" event={"ID":"724ac56d-9f4e-40f9-98f7-3a65c807f89c","Type":"ContainerStarted","Data":"82f1f45085e218f1f55a9136e3254ccdc1fa211287ae233a8c55cac8dffefc53"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.166092 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.177312 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.182613 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" event={"ID":"06f423e8-7ba9-497d-a587-cc880d66625b","Type":"ContainerStarted","Data":"56c1fd10638f29dafeb27ffa7115cdeb003a7359ee1ed235b339f0179c0d35cc"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.183210 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.191480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" event={"ID":"f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480","Type":"ContainerStarted","Data":"4c92742f8a5e535bc6b5f5904e7da02110afdab55404e255c0f6ea8cc39c7423"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.192196 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.197470 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" event={"ID":"b8285f65-9930-4bb9-9e18-b6ffe19f45fb","Type":"ContainerStarted","Data":"339be70683573a1353b9db72eee041412f640bbf2cc7893012895c264456fba1"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.197991 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.198786 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" event={"ID":"ac97fbc7-211e-41e3-8e16-aff853a7c9f4","Type":"ContainerStarted","Data":"2068c809a181525b69fe05d3809785c8a1eec718c2185a28899d2b3b8b803b19"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.199462 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" event={"ID":"743af71f-3542-439c-b3a1-33a7b9ae34f1","Type":"ContainerStarted","Data":"ca040ce6c73ff89cb1d8cfda66434cdcc02438701f3d9eca0d03356fe8eb4802"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.200435 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" event={"ID":"4aa5aa88-c6f2-4000-9a9d-3b14e23220de","Type":"ContainerStarted","Data":"d49831cb2e97f93fa52d785421513bd3f869a61da647ab699136fbb0a62dfb1a"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.200795 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.217283 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" event={"ID":"e7263d16-14c3-4254-821a-cbf99b7cf3e4","Type":"ContainerStarted","Data":"c51f3414571300129f4bfcc079c05de17dda99f9d8d9cd466fb10edcbcd997bf"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.217897 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.225262 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" podStartSLOduration=4.782385576 podStartE2EDuration="38.225244253s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.333133516 +0000 UTC m=+931.629238739" lastFinishedPulling="2026-01-24 07:09:23.775992173 +0000 UTC m=+965.072097416" observedRunningTime="2026-01-24 07:09:25.215293184 +0000 UTC m=+966.511398417" watchObservedRunningTime="2026-01-24 07:09:25.225244253 +0000 UTC m=+966.521349476" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.229929 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" 
event={"ID":"d94b056e-c445-4033-8d02-a794dae4b671","Type":"ContainerStarted","Data":"9a6a680419924b53ec911b1861b4a7af348a32eb50ff1c46774d13c781f0cb36"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.245206 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" event={"ID":"7660e41e-527d-4806-8ef3-6dee25fa72c5","Type":"ContainerStarted","Data":"944fea83c7bc11a1e40d8b451a4db69c7df772b2ac9279d851cb339d898de736"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.245854 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.250675 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" event={"ID":"e09ce8a8-a2a4-4fec-b36d-a97910aced0f","Type":"ContainerStarted","Data":"3cbf50aa875898845a1c79a07a81ec6d89cf719b1cf951c5d499758f66c75902"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.250997 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.266934 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" event={"ID":"20b0ee18-4569-4428-956f-d8795904f368","Type":"ContainerStarted","Data":"3eddbcb3815937735fbb8f61184b590439d3e5fa93b1b7a3b5bb4c0f9ad7fc72"} Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.267585 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.276232 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" podStartSLOduration=12.959704097 podStartE2EDuration="38.276214071s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.754156595 +0000 UTC m=+931.050261818" lastFinishedPulling="2026-01-24 07:09:15.070666569 +0000 UTC m=+956.366771792" observedRunningTime="2026-01-24 07:09:25.264927699 +0000 UTC m=+966.561032922" watchObservedRunningTime="2026-01-24 07:09:25.276214071 +0000 UTC m=+966.572319294" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.295422 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" podStartSLOduration=12.351475161 podStartE2EDuration="38.295406824s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048923347 +0000 UTC m=+931.345028570" lastFinishedPulling="2026-01-24 07:09:15.99285501 +0000 UTC m=+957.288960233" observedRunningTime="2026-01-24 07:09:25.293712333 +0000 UTC m=+966.589817556" watchObservedRunningTime="2026-01-24 07:09:25.295406824 +0000 UTC m=+966.591512047" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.324980 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" podStartSLOduration=3.907059319 podStartE2EDuration="37.324963706s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.358357033 +0000 UTC m=+931.654462256" lastFinishedPulling="2026-01-24 07:09:23.77626138 +0000 UTC m=+965.072366643" observedRunningTime="2026-01-24 07:09:25.322110557 +0000 UTC m=+966.618215780" watchObservedRunningTime="2026-01-24 07:09:25.324963706 +0000 UTC m=+966.621068929" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.353494 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" podStartSLOduration=12.979057794 podStartE2EDuration="38.353476293s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.696168978 +0000 UTC m=+930.992274201" lastFinishedPulling="2026-01-24 07:09:15.070587477 +0000 UTC m=+956.366692700" observedRunningTime="2026-01-24 07:09:25.349880387 +0000 UTC m=+966.645985610" watchObservedRunningTime="2026-01-24 07:09:25.353476293 +0000 UTC m=+966.649581516" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.386161 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" podStartSLOduration=12.440857325 podStartE2EDuration="38.38613859s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048975549 +0000 UTC m=+931.345080772" lastFinishedPulling="2026-01-24 07:09:15.994256814 +0000 UTC m=+957.290362037" observedRunningTime="2026-01-24 07:09:25.382098383 +0000 UTC m=+966.678203596" watchObservedRunningTime="2026-01-24 07:09:25.38613859 +0000 UTC m=+966.682243823" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.407164 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" podStartSLOduration=3.979516307 podStartE2EDuration="37.407151187s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.347352329 +0000 UTC m=+931.643457552" lastFinishedPulling="2026-01-24 07:09:23.774987209 +0000 UTC m=+965.071092432" observedRunningTime="2026-01-24 07:09:25.402068494 +0000 UTC m=+966.698173717" watchObservedRunningTime="2026-01-24 07:09:25.407151187 +0000 UTC m=+966.703256400" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.422867 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" podStartSLOduration=4.093468127 podStartE2EDuration="38.422850475s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.074847452 +0000 UTC m=+931.370952675" lastFinishedPulling="2026-01-24 07:09:24.4042298 +0000 UTC m=+965.700335023" observedRunningTime="2026-01-24 07:09:25.422363373 +0000 UTC m=+966.718468596" watchObservedRunningTime="2026-01-24 07:09:25.422850475 +0000 UTC m=+966.718955688" Jan 24 07:09:25 crc kubenswrapper[4675]: I0124 07:09:25.443787 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" podStartSLOduration=13.642949321 podStartE2EDuration="38.443767049s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.271482934 +0000 UTC m=+930.567588157" lastFinishedPulling="2026-01-24 07:09:14.072300662 +0000 UTC m=+955.368405885" observedRunningTime="2026-01-24 07:09:25.440211403 +0000 UTC m=+966.736316626" watchObservedRunningTime="2026-01-24 07:09:25.443767049 +0000 UTC m=+966.739872272" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.281008 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" event={"ID":"4bfb9011-058d-494d-96ce-a39202c7b851","Type":"ContainerStarted","Data":"dbfbe8ce8a09ab164a80f34f2d0aee9706c3fd8e799a3beb69f09761934a1fe1"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.282145 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.297375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" 
event={"ID":"7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320","Type":"ContainerStarted","Data":"99c5a18fd4b502378214dd0e94d6b9430cd885e614e1a777f5570eb696e8cece"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.325536 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" event={"ID":"bdc167a3-9335-4b3d-9696-a1d03b9ae618","Type":"ContainerStarted","Data":"1a2655619446c03c92907cb0e7f8741bdc37601beb16151b18dae586cf8e6643"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.325859 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.339652 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" event={"ID":"2db25911-f36e-43ae-8f47-b042ec82266e","Type":"ContainerStarted","Data":"ed20e3deaf284e65bf65231423658cb6b0d9aaa0b963876f87e5e4d33be9c3df"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.340311 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.356183 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" event={"ID":"d94b056e-c445-4033-8d02-a794dae4b671","Type":"ContainerStarted","Data":"6912401445e9fb503c6e8d10bebbcf087b09deef2b9aac79008abfb54513abf7"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.356873 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.369154 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" event={"ID":"fae349a1-6c08-4424-abe2-42dddccd55cc","Type":"ContainerStarted","Data":"004bed238ea54f9c76a13b219ef4453dd5a7393199c83db5a016e5dc674b8944"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.369435 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.379537 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" podStartSLOduration=3.98592147 podStartE2EDuration="38.379520426s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048544278 +0000 UTC m=+931.344649541" lastFinishedPulling="2026-01-24 07:09:24.442143274 +0000 UTC m=+965.738248497" observedRunningTime="2026-01-24 07:09:26.333103258 +0000 UTC m=+967.629208481" watchObservedRunningTime="2026-01-24 07:09:26.379520426 +0000 UTC m=+967.675625649" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.397882 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" event={"ID":"a1041f21-5d7d-4b17-84ff-ee83332e604d","Type":"ContainerStarted","Data":"ebd8d51b294de492fe33747f19e9ad0bda97bcb927f4ce46858fa17ce1238191"} Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.398550 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.415773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" event={"ID":"47e89f8e-f652-43a1-a36a-2db184700f3e","Type":"ContainerStarted","Data":"b458189190653b942d980218eda463e4df9fd758369cbaefd7a1dd5139c89606"} Jan 24 
07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.497331 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" podStartSLOduration=5.383797908 podStartE2EDuration="39.497315055s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.328610377 +0000 UTC m=+931.624715600" lastFinishedPulling="2026-01-24 07:09:24.442127504 +0000 UTC m=+965.738232747" observedRunningTime="2026-01-24 07:09:26.3899984 +0000 UTC m=+967.686103623" watchObservedRunningTime="2026-01-24 07:09:26.497315055 +0000 UTC m=+967.793420278" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.519797 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" podStartSLOduration=38.519779246 podStartE2EDuration="38.519779246s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:09:26.516536578 +0000 UTC m=+967.812641801" watchObservedRunningTime="2026-01-24 07:09:26.519779246 +0000 UTC m=+967.815884469" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.560764 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" podStartSLOduration=4.637043891 podStartE2EDuration="38.560748994s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.364611675 +0000 UTC m=+931.660716898" lastFinishedPulling="2026-01-24 07:09:24.288316778 +0000 UTC m=+965.584422001" observedRunningTime="2026-01-24 07:09:26.559105925 +0000 UTC m=+967.855211148" watchObservedRunningTime="2026-01-24 07:09:26.560748994 +0000 UTC m=+967.856854217" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.624039 4675 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" podStartSLOduration=4.945337743 podStartE2EDuration="39.624018048s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.705567944 +0000 UTC m=+931.001673167" lastFinishedPulling="2026-01-24 07:09:24.384248249 +0000 UTC m=+965.680353472" observedRunningTime="2026-01-24 07:09:26.623791793 +0000 UTC m=+967.919897016" watchObservedRunningTime="2026-01-24 07:09:26.624018048 +0000 UTC m=+967.920123271" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.678408 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" podStartSLOduration=4.642170715 podStartE2EDuration="38.678391159s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.335193116 +0000 UTC m=+931.631298339" lastFinishedPulling="2026-01-24 07:09:24.37141356 +0000 UTC m=+965.667518783" observedRunningTime="2026-01-24 07:09:26.677535788 +0000 UTC m=+967.973641011" watchObservedRunningTime="2026-01-24 07:09:26.678391159 +0000 UTC m=+967.974496372" Jan 24 07:09:26 crc kubenswrapper[4675]: I0124 07:09:26.682998 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" podStartSLOduration=5.381831992 podStartE2EDuration="39.682983569s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.102508229 +0000 UTC m=+931.398613452" lastFinishedPulling="2026-01-24 07:09:24.403659816 +0000 UTC m=+965.699765029" observedRunningTime="2026-01-24 07:09:26.649160494 +0000 UTC m=+967.945265717" watchObservedRunningTime="2026-01-24 07:09:26.682983569 +0000 UTC m=+967.979088792" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.427154 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" event={"ID":"6003a1f9-ad0e-49f6-8750-6ac2208560cc","Type":"ContainerStarted","Data":"5d48ea2e9f223b4595379807aaf5de239110db8ae85df11b2f2b31404cf510c2"} Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.457137 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" podStartSLOduration=3.8030543 podStartE2EDuration="40.457122633s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.795527492 +0000 UTC m=+931.091632715" lastFinishedPulling="2026-01-24 07:09:26.449595825 +0000 UTC m=+967.745701048" observedRunningTime="2026-01-24 07:09:27.455262159 +0000 UTC m=+968.751367382" watchObservedRunningTime="2026-01-24 07:09:27.457122633 +0000 UTC m=+968.753227856" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.485226 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" podStartSLOduration=5.906924145 podStartE2EDuration="40.48520711s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:49.792869978 +0000 UTC m=+931.088975201" lastFinishedPulling="2026-01-24 07:09:24.371152943 +0000 UTC m=+965.667258166" observedRunningTime="2026-01-24 07:09:27.484172335 +0000 UTC m=+968.780277558" watchObservedRunningTime="2026-01-24 07:09:27.48520711 +0000 UTC m=+968.781312333" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.717709 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:09:27 crc kubenswrapper[4675]: I0124 07:09:27.916103 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:09:28 
crc kubenswrapper[4675]: I0124 07:09:28.714103 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.107902 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-9fkjr" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.441469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" event={"ID":"5b3a45f7-a1eb-44a2-b0be-7c77b190d50c","Type":"ContainerStarted","Data":"71fc1a9a6301112ffa0f12e7733faa4bb40b3dad23a7d52a697e21a85c6cbad3"} Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.441954 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.444122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" event={"ID":"ac97fbc7-211e-41e3-8e16-aff853a7c9f4","Type":"ContainerStarted","Data":"5fbbaf84173124b47edf5d8fdb3463f5aef381f307479799985f207d6e958fde"} Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.444270 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.447300 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" event={"ID":"743af71f-3542-439c-b3a1-33a7b9ae34f1","Type":"ContainerStarted","Data":"9958b7b8c8a2f4a78e3c0a62b8589fba492cb8d6732d30733b03589531e57f21"} Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.448068 4675 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.459787 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" podStartSLOduration=4.990410139 podStartE2EDuration="42.459768998s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048732423 +0000 UTC m=+931.344837646" lastFinishedPulling="2026-01-24 07:09:27.518091282 +0000 UTC m=+968.814196505" observedRunningTime="2026-01-24 07:09:29.456674294 +0000 UTC m=+970.752779517" watchObservedRunningTime="2026-01-24 07:09:29.459768998 +0000 UTC m=+970.755874221" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.486155 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" podStartSLOduration=38.146044946 podStartE2EDuration="42.486136724s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:09:24.894521355 +0000 UTC m=+966.190626578" lastFinishedPulling="2026-01-24 07:09:29.234613123 +0000 UTC m=+970.530718356" observedRunningTime="2026-01-24 07:09:29.484250529 +0000 UTC m=+970.780355752" watchObservedRunningTime="2026-01-24 07:09:29.486136724 +0000 UTC m=+970.782241947" Jan 24 07:09:29 crc kubenswrapper[4675]: I0124 07:09:29.519759 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" podStartSLOduration=37.648835754 podStartE2EDuration="42.519737493s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:09:24.3585637 +0000 UTC m=+965.654668923" lastFinishedPulling="2026-01-24 07:09:29.229465429 +0000 UTC m=+970.525570662" observedRunningTime="2026-01-24 07:09:29.512400507 +0000 UTC 
m=+970.808505740" watchObservedRunningTime="2026-01-24 07:09:29.519737493 +0000 UTC m=+970.815842716" Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.457166 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" event={"ID":"6f867475-7eee-431c-97ee-12ae861193c7","Type":"ContainerStarted","Data":"382c4f4579727e4cac0d479ad9cabb3765a193fabf8900f2b0f0c8e80071354e"} Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.458367 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.471563 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" podStartSLOduration=4.098827885 podStartE2EDuration="43.471541948s" podCreationTimestamp="2026-01-24 07:08:47 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.048454456 +0000 UTC m=+931.344559679" lastFinishedPulling="2026-01-24 07:09:29.421168519 +0000 UTC m=+970.717273742" observedRunningTime="2026-01-24 07:09:30.468237978 +0000 UTC m=+971.764343201" watchObservedRunningTime="2026-01-24 07:09:30.471541948 +0000 UTC m=+971.767647171" Jan 24 07:09:30 crc kubenswrapper[4675]: I0124 07:09:30.907411 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-688fccdd58-dkxf7" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.645800 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-dwbq6" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.702346 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-6jbwg" Jan 24 07:09:37 crc 
kubenswrapper[4675]: I0124 07:09:37.727898 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-79fwx" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.787263 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-thqtz" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.921705 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqk98" Jan 24 07:09:37 crc kubenswrapper[4675]: I0124 07:09:37.946579 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-67vkh" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.049502 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l7jq5" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.115409 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bqd4q" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.137224 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6lq96" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.218301 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-vjf84" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.385789 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dzvlp" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.418707 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-4lmvf" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.461573 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q6qn9" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.474342 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-n4kll" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.559662 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-l5hrz" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.587207 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7d55b89685-9rvmf" Jan 24 07:09:38 crc kubenswrapper[4675]: I0124 07:09:38.718666 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-n6jmw" Jan 24 07:09:38 crc kubenswrapper[4675]: E0124 07:09:38.948366 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podUID="b7d1f492-700c-492e-a1c2-eae496f0133c" Jan 24 07:09:39 crc kubenswrapper[4675]: I0124 07:09:39.012076 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-k7crk" Jan 24 07:09:40 crc kubenswrapper[4675]: I0124 07:09:40.306911 4675 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk" Jan 24 07:09:43 crc kubenswrapper[4675]: I0124 07:09:43.856291 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-c5658" Jan 24 07:09:53 crc kubenswrapper[4675]: I0124 07:09:53.946126 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:09:55 crc kubenswrapper[4675]: I0124 07:09:55.685451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" event={"ID":"b7d1f492-700c-492e-a1c2-eae496f0133c","Type":"ContainerStarted","Data":"c00a6ff9bfac024388ab78c52a578b491b454e5daa5dda8aea6a75bab92e39c9"} Jan 24 07:09:55 crc kubenswrapper[4675]: I0124 07:09:55.699030 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9cmpf" podStartSLOduration=3.562719721 podStartE2EDuration="1m7.699007753s" podCreationTimestamp="2026-01-24 07:08:48 +0000 UTC" firstStartedPulling="2026-01-24 07:08:50.348362423 +0000 UTC m=+931.644467646" lastFinishedPulling="2026-01-24 07:09:54.484650455 +0000 UTC m=+995.780755678" observedRunningTime="2026-01-24 07:09:55.697523607 +0000 UTC m=+996.993628830" watchObservedRunningTime="2026-01-24 07:09:55.699007753 +0000 UTC m=+996.995112976" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.835292 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.837135 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.839253 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.839314 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.839855 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.840077 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.841030 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9rr77" Jan 24 07:10:10 crc kubenswrapper[4675]: I0124 07:10:10.846989 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.006834 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.006875 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.006929 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.108634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.108849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.109570 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.110286 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.111180 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") 
pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.130988 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"dnsmasq-dns-78dd6ddcc-rxp64\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.158088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:11 crc kubenswrapper[4675]: W0124 07:10:11.592117 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46a81ceb_1fea_49de_8e4a_3f4b1dabaa55.slice/crio-900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385 WatchSource:0}: Error finding container 900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385: Status 404 returned error can't find the container with id 900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385 Jan 24 07:10:11 crc kubenswrapper[4675]: I0124 07:10:11.593588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:12 crc kubenswrapper[4675]: I0124 07:10:12.169856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" event={"ID":"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55","Type":"ContainerStarted","Data":"900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385"} Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.607224 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.617922 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.626358 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.748735 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.748805 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.748849 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.850474 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.850537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.850583 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.852038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.852072 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.886227 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"dnsmasq-dns-666b6646f7-rrp8m\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") " pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.934082 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" Jan 24 07:10:13 crc kubenswrapper[4675]: I0124 07:10:13.982457 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.021558 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.023166 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.034039 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.168683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.169077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.169110 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 
07:10:14.271316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.272499 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.271380 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.272849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.273462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.292029 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msf89\" 
(UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"dnsmasq-dns-57d769cc4f-c9krd\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") " pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.346113 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.489209 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"] Jan 24 07:10:14 crc kubenswrapper[4675]: W0124 07:10:14.505827 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347ffd58_e301_4dd3_9416_2d6fa5ffdaa7.slice/crio-23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242 WatchSource:0}: Error finding container 23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242: Status 404 returned error can't find the container with id 23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242 Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.781016 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"] Jan 24 07:10:14 crc kubenswrapper[4675]: W0124 07:10:14.789205 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3186ca49_238e_418a_95e7_f857a9f3bd75.slice/crio-68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436 WatchSource:0}: Error finding container 68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436: Status 404 returned error can't find the container with id 68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436 Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.802328 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:10:14 crc 
kubenswrapper[4675]: I0124 07:10:14.803545 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808240 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nnfwj" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808648 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808846 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808255 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.808326 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.809556 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.816707 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984643 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984733 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984761 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984810 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.984835 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985426 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985460 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985492 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:14 crc kubenswrapper[4675]: I0124 07:10:14.985516 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089160 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089209 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089227 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089275 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089293 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089318 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089378 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089451 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0" Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.089907 4675 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.090200 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.090532 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.103971 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.107527 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.109833 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.110693 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.111628 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.114130 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.115692 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.117427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.126277 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.130460 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.170600 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.194322 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.194800 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.204505 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.204842 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205060 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205089 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205285 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.205369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bt874"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.227595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerStarted","Data":"68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436"}
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.235697 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" event={"ID":"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7","Type":"ContainerStarted","Data":"23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242"}
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293685 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293846 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.293995 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294022 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294090 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294116 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294181 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.294227 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396457 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396528 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396551 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396568 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396601 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396623 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396641 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396671 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.396712 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.397129 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.397694 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.400282 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.403590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.404141 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.404531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.411813 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.412293 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.412771 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.418668 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.436590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.465397 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:15 crc kubenswrapper[4675]: I0124 07:10:15.530215 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.332695 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.333991 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341101 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341345 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6lg82"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341659 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.341822 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.342470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.349668 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411145 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-default\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411224 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411254 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdbsr\" (UniqueName: \"kubernetes.io/projected/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kube-api-access-rdbsr\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411344 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411386 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.411415 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kolla-config\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512155 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-default\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512246 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdbsr\" (UniqueName: \"kubernetes.io/projected/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kube-api-access-rdbsr\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512347 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.512368 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kolla-config\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.513038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kolla-config\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.513694 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.513766 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.514381 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-config-data-default\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.515051 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009254f3-9d76-4d89-8e35-d2b4c4be0da8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.517339 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.534383 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/009254f3-9d76-4d89-8e35-d2b4c4be0da8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.538240 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdbsr\" (UniqueName: \"kubernetes.io/projected/009254f3-9d76-4d89-8e35-d2b4c4be0da8-kube-api-access-rdbsr\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.548377 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"009254f3-9d76-4d89-8e35-d2b4c4be0da8\") " pod="openstack/openstack-galera-0"
Jan 24 07:10:16 crc kubenswrapper[4675]: I0124 07:10:16.662393 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.558965 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.560182 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.562939 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wv6lc"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.563106 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.563313 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.572835 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.587963 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630000 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630212 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630270 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630333 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630410 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qfc\" (UniqueName: \"kubernetes.io/projected/e189b411-9dd6-496f-a001-41bc90c3fe00-kube-api-access-b6qfc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630461 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630595 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.630706 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732472 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732498 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qfc\" (UniqueName: \"kubernetes.io/projected/e189b411-9dd6-496f-a001-41bc90c3fe00-kube-api-access-b6qfc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732678 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.732944 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.733738 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.733907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.734245 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e189b411-9dd6-496f-a001-41bc90c3fe00-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.735102 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e189b411-9dd6-496f-a001-41bc90c3fe00-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.736803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.737097 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e189b411-9dd6-496f-a001-41bc90c3fe00-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:17 crc
kubenswrapper[4675]: I0124 07:10:17.761203 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qfc\" (UniqueName: \"kubernetes.io/projected/e189b411-9dd6-496f-a001-41bc90c3fe00-kube-api-access-b6qfc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.787318 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e189b411-9dd6-496f-a001-41bc90c3fe00\") " pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.884472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.992247 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.993208 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.996800 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-r96rq" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.997129 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 24 07:10:17 crc kubenswrapper[4675]: I0124 07:10:17.999235 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.008629 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.137712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnkrr\" (UniqueName: \"kubernetes.io/projected/b2446e52-3d97-46f2-ac99-4bb1af82d302-kube-api-access-rnkrr\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138073 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-kolla-config\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138251 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-config-data\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.138563 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240312 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-config-data\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240437 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnkrr\" (UniqueName: \"kubernetes.io/projected/b2446e52-3d97-46f2-ac99-4bb1af82d302-kube-api-access-rnkrr\") pod \"memcached-0\" (UID: 
\"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.240462 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-kolla-config\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.241183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-kolla-config\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.241500 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2446e52-3d97-46f2-ac99-4bb1af82d302-config-data\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.243849 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.244512 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2446e52-3d97-46f2-ac99-4bb1af82d302-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.279280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnkrr\" (UniqueName: 
\"kubernetes.io/projected/b2446e52-3d97-46f2-ac99-4bb1af82d302-kube-api-access-rnkrr\") pod \"memcached-0\" (UID: \"b2446e52-3d97-46f2-ac99-4bb1af82d302\") " pod="openstack/memcached-0" Jan 24 07:10:18 crc kubenswrapper[4675]: I0124 07:10:18.310337 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.847256 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.849276 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.851785 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vf6k6" Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.869265 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:10:19 crc kubenswrapper[4675]: I0124 07:10:19.988548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"kube-state-metrics-0\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") " pod="openstack/kube-state-metrics-0" Jan 24 07:10:20 crc kubenswrapper[4675]: I0124 07:10:20.090443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"kube-state-metrics-0\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") " pod="openstack/kube-state-metrics-0" Jan 24 07:10:20 crc kubenswrapper[4675]: I0124 07:10:20.114065 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q67r2\" (UniqueName: 
\"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"kube-state-metrics-0\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") " pod="openstack/kube-state-metrics-0" Jan 24 07:10:20 crc kubenswrapper[4675]: I0124 07:10:20.170953 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.148799 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2x2kb"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.150454 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.153264 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.153455 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-x9sdf" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.153592 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.167795 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.200824 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-fsln2"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.202321 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-combined-ca-bundle\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-log-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-scripts\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.255974 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.256037 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 
07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.256085 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kbww\" (UniqueName: \"kubernetes.io/projected/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-kube-api-access-8kbww\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.256104 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-ovn-controller-tls-certs\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.288072 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fsln2"] Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.356991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357275 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-lib\") pod \"ovn-controller-ovs-fsln2\" (UID: 
\"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357356 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kbww\" (UniqueName: \"kubernetes.io/projected/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-kube-api-access-8kbww\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-ovn-controller-tls-certs\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357402 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feda0648-be0d-4fb4-a3a4-42440e47fec0-scripts\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357426 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-log\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357449 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-etc-ovs\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " 
pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357465 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-combined-ca-bundle\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357481 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-run\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357508 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-log-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357526 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-scripts\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.357547 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g468d\" (UniqueName: \"kubernetes.io/projected/feda0648-be0d-4fb4-a3a4-42440e47fec0-kube-api-access-g468d\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 
07:10:23.357637 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.358539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-run\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.358754 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-var-log-ovn\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.363418 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-ovn-controller-tls-certs\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.369500 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-scripts\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.379287 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-combined-ca-bundle\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.393509 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kbww\" (UniqueName: \"kubernetes.io/projected/b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1-kube-api-access-8kbww\") pod \"ovn-controller-2x2kb\" (UID: \"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1\") " pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459514 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-lib\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459595 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feda0648-be0d-4fb4-a3a4-42440e47fec0-scripts\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459660 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-log\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-etc-ovs\") pod \"ovn-controller-ovs-fsln2\" (UID: 
\"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459736 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-run\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459768 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g468d\" (UniqueName: \"kubernetes.io/projected/feda0648-be0d-4fb4-a3a4-42440e47fec0-kube-api-access-g468d\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459934 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-log\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.459996 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-run\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.460071 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-etc-ovs\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.460289 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/feda0648-be0d-4fb4-a3a4-42440e47fec0-var-lib\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.462243 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/feda0648-be0d-4fb4-a3a4-42440e47fec0-scripts\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.478703 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g468d\" (UniqueName: \"kubernetes.io/projected/feda0648-be0d-4fb4-a3a4-42440e47fec0-kube-api-access-g468d\") pod \"ovn-controller-ovs-fsln2\" (UID: \"feda0648-be0d-4fb4-a3a4-42440e47fec0\") " pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.546409 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.573337 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:23 crc kubenswrapper[4675]: I0124 07:10:23.751701 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.047402 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.049001 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.051908 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-g8snx" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.052181 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.052343 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.054654 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.054836 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.054911 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221008 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221203 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwm66\" (UniqueName: \"kubernetes.io/projected/19fa54da-8a94-427d-b8c6-0881657d3324-kube-api-access-lwm66\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221310 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-config\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221360 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221399 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221414 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221618 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.221678 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323682 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-config\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323850 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323911 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 
07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323936 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.323956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.324008 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwm66\" (UniqueName: \"kubernetes.io/projected/19fa54da-8a94-427d-b8c6-0881657d3324-kube-api-access-lwm66\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.324755 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.325340 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.326234 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.327540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fa54da-8a94-427d-b8c6-0881657d3324-config\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.330753 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.338776 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.339936 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwm66\" (UniqueName: \"kubernetes.io/projected/19fa54da-8a94-427d-b8c6-0881657d3324-kube-api-access-lwm66\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.351074 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc 
kubenswrapper[4675]: I0124 07:10:24.354554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19fa54da-8a94-427d-b8c6-0881657d3324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19fa54da-8a94-427d-b8c6-0881657d3324\") " pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:24 crc kubenswrapper[4675]: I0124 07:10:24.379469 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.927899 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.932132 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.935799 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.935974 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.936085 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.936192 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xzx52" Jan 24 07:10:26 crc kubenswrapper[4675]: I0124 07:10:26.941408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.072794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2w4\" (UniqueName: \"kubernetes.io/projected/f1d973fa-2671-49fe-82f1-1862aa70d784-kube-api-access-cw2w4\") pod 
\"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.072863 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.072902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073085 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073180 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073277 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 
crc kubenswrapper[4675]: I0124 07:10:27.073303 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.073335 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174236 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174256 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174274 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174313 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2w4\" (UniqueName: \"kubernetes.io/projected/f1d973fa-2671-49fe-82f1-1862aa70d784-kube-api-access-cw2w4\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174340 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.174393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.175208 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " 
pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.176297 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1d973fa-2671-49fe-82f1-1862aa70d784-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.176610 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.177932 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.180825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.181587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.184483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d973fa-2671-49fe-82f1-1862aa70d784-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.191827 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2w4\" (UniqueName: \"kubernetes.io/projected/f1d973fa-2671-49fe-82f1-1862aa70d784-kube-api-access-cw2w4\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.202176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1d973fa-2671-49fe-82f1-1862aa70d784\") " pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:27 crc kubenswrapper[4675]: I0124 07:10:27.286349 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:29 crc kubenswrapper[4675]: I0124 07:10:29.364511 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:10:29 crc kubenswrapper[4675]: I0124 07:10:29.390531 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b2446e52-3d97-46f2-ac99-4bb1af82d302","Type":"ContainerStarted","Data":"1e1711dc3ff8bfb93fc2b85d68f325dfc25c3f1c6209bea8ccb1f666cf326082"} Jan 24 07:10:30 crc kubenswrapper[4675]: W0124 07:10:30.137562 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321 WatchSource:0}: Error finding container dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321: Status 404 returned error can't find the container with id dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321 Jan 24 07:10:30 crc kubenswrapper[4675]: E0124 07:10:30.193749 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 24 07:10:30 crc kubenswrapper[4675]: E0124 07:10:30.194462 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gplkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-rxp64_openstack(46a81ceb-1fea-49de-8e4a-3f4b1dabaa55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:10:30 crc kubenswrapper[4675]: E0124 07:10:30.195681 4675 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" podUID="46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.413836 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerStarted","Data":"dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321"} Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.667079 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.765589 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 24 07:10:30 crc kubenswrapper[4675]: I0124 07:10:30.897977 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.034581 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.073029 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.149753 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") pod \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.149862 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") pod \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.150380 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") pod \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\" (UID: \"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55\") " Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.151491 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" (UID: "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.152031 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config" (OuterVolumeSpecName: "config") pod "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" (UID: "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.155276 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx" (OuterVolumeSpecName: "kube-api-access-gplkx") pod "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" (UID: "46a81ceb-1fea-49de-8e4a-3f4b1dabaa55"). InnerVolumeSpecName "kube-api-access-gplkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.199554 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.231739 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb"] Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.251835 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gplkx\" (UniqueName: \"kubernetes.io/projected/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-kube-api-access-gplkx\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.251859 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.251873 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.396455 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fsln2"] Jan 24 07:10:31 crc kubenswrapper[4675]: W0124 07:10:31.422655 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeda0648_be0d_4fb4_a3a4_42440e47fec0.slice/crio-2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d WatchSource:0}: Error finding container 2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d: Status 404 returned error can't find the container with id 2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.424685 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerStarted","Data":"fa3a7f0d316a1089a5558c3fea3965d7f92601c58852075ba23fbeab293e2591"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.426605 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerStarted","Data":"8fc4ca63f03726f8d4f4612fb16075bb874d642ea255530f0cde869af0c01186"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.431432 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb" event={"ID":"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1","Type":"ContainerStarted","Data":"f4a06e2d0cf6671e04d0709ab0b6d54aed75cf01a0352517cd191de335ae03ed"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.443308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerStarted","Data":"740cddf5546043e34032d9d1e0dcd3e121c3fb18c86f2b4c15c0e929ce705afd"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.447070 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1d973fa-2671-49fe-82f1-1862aa70d784","Type":"ContainerStarted","Data":"3811e761deccebcd7be2da12e763920224c9f8202de3108beef61aea6223d4ec"} Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 
07:10:31.448955 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerStarted","Data":"e02cfc39376a20ed79af6aa4a70a95d12cb107645ef263fc4bfe2732893da583"}
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.451027 4675 generic.go:334] "Generic (PLEG): container finished" podID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerID="4fd0f48bc136df29146a9e239c77e392eeb5ff8cf314ea027a498f9dbf5099cb" exitCode=0
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.451060 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerDied","Data":"4fd0f48bc136df29146a9e239c77e392eeb5ff8cf314ea027a498f9dbf5099cb"}
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.483623 4675 generic.go:334] "Generic (PLEG): container finished" podID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerID="db00382708b7c809b6812f592aaa217f75f3715a6895df49399a3befed2578cd" exitCode=0
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.483716 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" event={"ID":"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7","Type":"ContainerDied","Data":"db00382708b7c809b6812f592aaa217f75f3715a6895df49399a3befed2578cd"}
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.488427 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64" event={"ID":"46a81ceb-1fea-49de-8e4a-3f4b1dabaa55","Type":"ContainerDied","Data":"900476cefd33cf6c819789d882d8f3094b60f9c805b28b72e9fc9d1ff3e62385"}
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.488529 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rxp64"
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.585935 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"]
Jan 24 07:10:31 crc kubenswrapper[4675]: I0124 07:10:31.590579 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rxp64"]
Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.139155 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.498584 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"2ff49d9de1968ffa7aedcd9ff18b5ac08e750b31d74923e74433daf307de269d"}
Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.501343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerStarted","Data":"f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7"}
Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.501628 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd"
Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.522145 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" podStartSLOduration=3.697337798 podStartE2EDuration="19.52212553s" podCreationTimestamp="2026-01-24 07:10:13 +0000 UTC" firstStartedPulling="2026-01-24 07:10:14.791693774 +0000 UTC m=+1016.087798997" lastFinishedPulling="2026-01-24 07:10:30.616481506 +0000 UTC m=+1031.912586729" observedRunningTime="2026-01-24 07:10:32.517012087 +0000 UTC m=+1033.813117310" watchObservedRunningTime="2026-01-24 07:10:32.52212553 +0000 UTC m=+1033.818230753"
Jan 24 07:10:32 crc kubenswrapper[4675]: I0124 07:10:32.964629 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a81ceb-1fea-49de-8e4a-3f4b1dabaa55" path="/var/lib/kubelet/pods/46a81ceb-1fea-49de-8e4a-3f4b1dabaa55/volumes"
Jan 24 07:10:34 crc kubenswrapper[4675]: I0124 07:10:34.516964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19fa54da-8a94-427d-b8c6-0881657d3324","Type":"ContainerStarted","Data":"62f5fb1b55e2d41e9410e6406c995f2f94bee324c32adb15f7946aa6c918cf36"}
Jan 24 07:10:39 crc kubenswrapper[4675]: I0124 07:10:39.347806 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd"
Jan 24 07:10:39 crc kubenswrapper[4675]: I0124 07:10:39.430459 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"]
Jan 24 07:10:44 crc kubenswrapper[4675]: E0124 07:10:44.145803 4675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Jan 24 07:10:44 crc kubenswrapper[4675]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 24 07:10:44 crc kubenswrapper[4675]: > podSandboxID="23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242"
Jan 24 07:10:44 crc kubenswrapper[4675]: E0124 07:10:44.146324 4675 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 24 07:10:44 crc kubenswrapper[4675]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwc6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-rrp8m_openstack(347ffd58-e301-4dd3-9416-2d6fa5ffdaa7): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 24 07:10:44 crc kubenswrapper[4675]: > logger="UnhandledError"
Jan 24 07:10:44 crc kubenswrapper[4675]: E0124 07:10:44.148183 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7"
Jan 24 07:10:44 crc kubenswrapper[4675]: I0124 07:10:44.594713 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b2446e52-3d97-46f2-ac99-4bb1af82d302","Type":"ContainerStarted","Data":"b78347d701dab73f2a9ce94675d3875d5d44005c42b88778f2c4040087df0298"}
Jan 24 07:10:44 crc kubenswrapper[4675]: I0124 07:10:44.617986 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.587104302 podStartE2EDuration="27.617966626s" podCreationTimestamp="2026-01-24 07:10:17 +0000 UTC" firstStartedPulling="2026-01-24 07:10:29.046499541 +0000 UTC m=+1030.342604774" lastFinishedPulling="2026-01-24 07:10:40.077361875 +0000 UTC m=+1041.373467098" observedRunningTime="2026-01-24 07:10:44.61312473 +0000 UTC m=+1045.909229963" watchObservedRunningTime="2026-01-24 07:10:44.617966626 +0000 UTC m=+1045.914071849"
Jan 24 07:10:44 crc kubenswrapper[4675]: I0124 07:10:44.901029 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.029897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") pod \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") "
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.029977 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") pod \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") "
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.029995 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") pod \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\" (UID: \"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7\") "
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.041129 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h" (OuterVolumeSpecName: "kube-api-access-qwc6h") pod "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" (UID: "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7"). InnerVolumeSpecName "kube-api-access-qwc6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.086269 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config" (OuterVolumeSpecName: "config") pod "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" (UID: "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.132114 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.132148 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwc6h\" (UniqueName: \"kubernetes.io/projected/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-kube-api-access-qwc6h\") on node \"crc\" DevicePath \"\""
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.191213 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" (UID: "347ffd58-e301-4dd3-9416-2d6fa5ffdaa7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.234235 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.606573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m" event={"ID":"347ffd58-e301-4dd3-9416-2d6fa5ffdaa7","Type":"ContainerDied","Data":"23bfc54e831fd43388b201fd9b3294d34b2fc6a04e54c0d273f63f0d56af9242"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.606631 4675 scope.go:117] "RemoveContainer" containerID="db00382708b7c809b6812f592aaa217f75f3715a6895df49399a3befed2578cd"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.606771 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rrp8m"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.623095 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerStarted","Data":"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.624071 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.627600 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"f92666dfa3d23140f8f187d267cdc1bb27ed28fbdc3af8b599e8f1268441877d"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.629299 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerStarted","Data":"0ae21b832453ed4f327a6995ee446a22269efa1f7b1b840709bec51995212ba9"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.632964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb" event={"ID":"b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1","Type":"ContainerStarted","Data":"622c76f05e7ba7105e7db895c809f2f5474704e27760f7f3c7dc749d61df9341"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.650451 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2x2kb"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.689422 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerStarted","Data":"fccd7353bc9b512dab19a9626b8d14920cde36940e4eb90c09849fdd3c88cc40"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.693135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19fa54da-8a94-427d-b8c6-0881657d3324","Type":"ContainerStarted","Data":"f82855f31cbb83691523304e204ae6b84c2869b08b4b226e08ae4ccd93488600"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.697791 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1d973fa-2671-49fe-82f1-1862aa70d784","Type":"ContainerStarted","Data":"40239ed7d0caf1c06b37d63d9177a204a011895a4300b6c52ecc8ee1f76c5f4a"}
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.697975 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.724302 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2x2kb" podStartSLOduration=10.313119387 podStartE2EDuration="22.724268102s" podCreationTimestamp="2026-01-24 07:10:23 +0000 UTC" firstStartedPulling="2026-01-24 07:10:31.256278604 +0000 UTC m=+1032.552383827" lastFinishedPulling="2026-01-24 07:10:43.667427319 +0000 UTC m=+1044.963532542" observedRunningTime="2026-01-24 07:10:45.720274536 +0000 UTC m=+1047.016379759" watchObservedRunningTime="2026-01-24 07:10:45.724268102 +0000 UTC m=+1047.020373325"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.774700 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.019901473 podStartE2EDuration="26.774677095s" podCreationTimestamp="2026-01-24 07:10:19 +0000 UTC" firstStartedPulling="2026-01-24 07:10:31.049904288 +0000 UTC m=+1032.346009511" lastFinishedPulling="2026-01-24 07:10:44.80467991 +0000 UTC m=+1046.100785133" observedRunningTime="2026-01-24 07:10:45.768423145 +0000 UTC m=+1047.064528368" watchObservedRunningTime="2026-01-24 07:10:45.774677095 +0000 UTC m=+1047.070782318"
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.839481 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"]
Jan 24 07:10:45 crc kubenswrapper[4675]: I0124 07:10:45.841437 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rrp8m"]
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.709859 4675 generic.go:334] "Generic (PLEG): container finished" podID="feda0648-be0d-4fb4-a3a4-42440e47fec0" containerID="f92666dfa3d23140f8f187d267cdc1bb27ed28fbdc3af8b599e8f1268441877d" exitCode=0
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.710259 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerDied","Data":"f92666dfa3d23140f8f187d267cdc1bb27ed28fbdc3af8b599e8f1268441877d"}
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.712481 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerStarted","Data":"78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761"}
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.715886 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerStarted","Data":"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a"}
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805047 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b7pft"]
Jan 24 07:10:46 crc kubenswrapper[4675]: E0124 07:10:46.805335 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerName="init"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805350 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerName="init"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805479 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" containerName="init"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.805987 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.813138 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.861483 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b7pft"]
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880474 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0062ff-7e89-4c55-8796-de1c9e311dd2-config\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovs-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-combined-ca-bundle\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880617 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khc6z\" (UniqueName: \"kubernetes.io/projected/1e0062ff-7e89-4c55-8796-de1c9e311dd2-kube-api-access-khc6z\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovn-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.880663 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.964801 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347ffd58-e301-4dd3-9416-2d6fa5ffdaa7" path="/var/lib/kubelet/pods/347ffd58-e301-4dd3-9416-2d6fa5ffdaa7/volumes"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.981889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovn-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovn-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982260 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khc6z\" (UniqueName: \"kubernetes.io/projected/1e0062ff-7e89-4c55-8796-de1c9e311dd2-kube-api-access-khc6z\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0062ff-7e89-4c55-8796-de1c9e311dd2-config\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982409 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovs-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.982428 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-combined-ca-bundle\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.984137 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1e0062ff-7e89-4c55-8796-de1c9e311dd2-ovs-rundir\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.986010 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0062ff-7e89-4c55-8796-de1c9e311dd2-config\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.988232 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"]
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.988587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.989828 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:46 crc kubenswrapper[4675]: I0124 07:10:46.993133 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0062ff-7e89-4c55-8796-de1c9e311dd2-combined-ca-bundle\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.006170 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.016128 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"]
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.035924 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khc6z\" (UniqueName: \"kubernetes.io/projected/1e0062ff-7e89-4c55-8796-de1c9e311dd2-kube-api-access-khc6z\") pod \"ovn-controller-metrics-b7pft\" (UID: \"1e0062ff-7e89-4c55-8796-de1c9e311dd2\") " pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084729 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.084783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.152003 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-b7pft"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186438 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.186458 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.187584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.187707 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.188241 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.238803 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"dnsmasq-dns-5bf47b49b7-cfmfx\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.284221 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"]
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.284862 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.313180 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"]
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.314381 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.340512 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.377173 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"]
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.400852 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.400902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.400927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.401042 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.401323 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502569 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502606 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.502684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.503562 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.505741 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.506308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.507088 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.525688 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"dnsmasq-dns-8554648995-8gnzm\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.692841 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:47 crc kubenswrapper[4675]: I0124 07:10:47.727746 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"aa0a5d8e8aac7f7b87343b538181792fc356629bd801832810e2fb71541e6be6"} Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.775180 4675 generic.go:334] "Generic (PLEG): container finished" podID="009254f3-9d76-4d89-8e35-d2b4c4be0da8" containerID="fccd7353bc9b512dab19a9626b8d14920cde36940e4eb90c09849fdd3c88cc40" exitCode=0 Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.775279 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerDied","Data":"fccd7353bc9b512dab19a9626b8d14920cde36940e4eb90c09849fdd3c88cc40"} Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.867420 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"] Jan 24 07:10:49 crc kubenswrapper[4675]: W0124 07:10:49.884981 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd380ff5f_2ad7_495e_8cd4_2df178c2cd02.slice/crio-21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b WatchSource:0}: Error finding container 21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b: Status 404 returned error can't find the container with id 21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b Jan 24 07:10:49 crc kubenswrapper[4675]: I0124 07:10:49.961735 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:50 crc kubenswrapper[4675]: W0124 07:10:50.004261 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ea20e09_7a89_4bb3_9413_ac6a647743d5.slice/crio-c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05 WatchSource:0}: Error finding container c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05: Status 404 returned error can't find the container with id c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.067018 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b7pft"] Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.178277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.799871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b7pft" event={"ID":"1e0062ff-7e89-4c55-8796-de1c9e311dd2","Type":"ContainerStarted","Data":"7dfc84b6b922b8d7ec493d0f3e73628bc4680f101d2a7964814de75ae82eded3"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.800428 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b7pft" event={"ID":"1e0062ff-7e89-4c55-8796-de1c9e311dd2","Type":"ContainerStarted","Data":"a2ee403aad04d5585208f77253209b388e25840601ddfe9a671d037335a615c7"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.805203 4675 generic.go:334] "Generic (PLEG): container finished" podID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerID="624d2ca0254053a07dea6ae65d180d0383f2a74d415ec75092f490a0bf16f7ec" exitCode=0 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.805281 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" event={"ID":"2ea20e09-7a89-4bb3-9413-ac6a647743d5","Type":"ContainerDied","Data":"624d2ca0254053a07dea6ae65d180d0383f2a74d415ec75092f490a0bf16f7ec"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 
07:10:50.805308 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" event={"ID":"2ea20e09-7a89-4bb3-9413-ac6a647743d5","Type":"ContainerStarted","Data":"c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.831035 4675 generic.go:334] "Generic (PLEG): container finished" podID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerID="09845de0d6cfe71d82646bcb54bb06d9aef2cb9d4719b6cc5abade59a5012412" exitCode=0 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.831166 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerDied","Data":"09845de0d6cfe71d82646bcb54bb06d9aef2cb9d4719b6cc5abade59a5012412"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.831203 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerStarted","Data":"21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.838430 4675 generic.go:334] "Generic (PLEG): container finished" podID="e189b411-9dd6-496f-a001-41bc90c3fe00" containerID="0ae21b832453ed4f327a6995ee446a22269efa1f7b1b840709bec51995212ba9" exitCode=0 Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.838788 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerDied","Data":"0ae21b832453ed4f327a6995ee446a22269efa1f7b1b840709bec51995212ba9"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.839585 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b7pft" podStartSLOduration=4.839573824 podStartE2EDuration="4.839573824s" podCreationTimestamp="2026-01-24 
07:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:10:50.832949425 +0000 UTC m=+1052.129054688" watchObservedRunningTime="2026-01-24 07:10:50.839573824 +0000 UTC m=+1052.135679057" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.843469 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"009254f3-9d76-4d89-8e35-d2b4c4be0da8","Type":"ContainerStarted","Data":"7d706f3d1615592ecc3cfb08c32c63f350963a71eaf64e1c60fc01af77d17b35"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.857628 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19fa54da-8a94-427d-b8c6-0881657d3324","Type":"ContainerStarted","Data":"33c31370ac708fbd811b180b4aec0d14167eefe1861c7fd1a105ad4e8d5bc995"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.860045 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1d973fa-2671-49fe-82f1-1862aa70d784","Type":"ContainerStarted","Data":"77aff0e3913197141e772ab9bf145c2690bf6faeedd85b5dab59a3e557b6bd0c"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.874083 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fsln2" event={"ID":"feda0648-be0d-4fb4-a3a4-42440e47fec0","Type":"ContainerStarted","Data":"8f106c7b12eaf8cb669841601e806e0f5ea0f21c800ba11b14b623b4f2aa41cf"} Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.874513 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.879277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.975736 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.504361822 podStartE2EDuration="25.97569764s" podCreationTimestamp="2026-01-24 07:10:25 +0000 UTC" firstStartedPulling="2026-01-24 07:10:31.213137106 +0000 UTC m=+1032.509242319" lastFinishedPulling="2026-01-24 07:10:49.684472874 +0000 UTC m=+1050.980578137" observedRunningTime="2026-01-24 07:10:50.974162313 +0000 UTC m=+1052.270267546" watchObservedRunningTime="2026-01-24 07:10:50.97569764 +0000 UTC m=+1052.271802873" Jan 24 07:10:50 crc kubenswrapper[4675]: I0124 07:10:50.976067 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.455461292 podStartE2EDuration="35.976060249s" podCreationTimestamp="2026-01-24 07:10:15 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.922660655 +0000 UTC m=+1032.218765878" lastFinishedPulling="2026-01-24 07:10:40.443259612 +0000 UTC m=+1041.739364835" observedRunningTime="2026-01-24 07:10:50.939345895 +0000 UTC m=+1052.235451118" watchObservedRunningTime="2026-01-24 07:10:50.976060249 +0000 UTC m=+1052.272165482" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.047889 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.114203235 podStartE2EDuration="28.047873028s" podCreationTimestamp="2026-01-24 07:10:23 +0000 UTC" firstStartedPulling="2026-01-24 07:10:33.798928419 +0000 UTC m=+1035.095033642" lastFinishedPulling="2026-01-24 07:10:49.732598212 +0000 UTC m=+1051.028703435" observedRunningTime="2026-01-24 07:10:51.047807516 +0000 UTC m=+1052.343912759" watchObservedRunningTime="2026-01-24 07:10:51.047873028 +0000 UTC m=+1052.343978251" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.110764 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-fsln2" podStartSLOduration=18.357298261 podStartE2EDuration="28.110745971s" podCreationTimestamp="2026-01-24 07:10:23 
+0000 UTC" firstStartedPulling="2026-01-24 07:10:31.430495288 +0000 UTC m=+1032.726600511" lastFinishedPulling="2026-01-24 07:10:41.183942958 +0000 UTC m=+1042.480048221" observedRunningTime="2026-01-24 07:10:51.10909329 +0000 UTC m=+1052.405198513" watchObservedRunningTime="2026-01-24 07:10:51.110745971 +0000 UTC m=+1052.406851194" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.287458 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.300874 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.331071 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.380277 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387179 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387410 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387546 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.387575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") pod \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\" (UID: \"2ea20e09-7a89-4bb3-9413-ac6a647743d5\") " Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.392647 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t" (OuterVolumeSpecName: "kube-api-access-x7x6t") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "kube-api-access-x7x6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.405958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.409529 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config" (OuterVolumeSpecName: "config") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.424424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ea20e09-7a89-4bb3-9413-ac6a647743d5" (UID: "2ea20e09-7a89-4bb3-9413-ac6a647743d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.444050 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489424 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489454 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489464 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ea20e09-7a89-4bb3-9413-ac6a647743d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.489476 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7x6t\" (UniqueName: \"kubernetes.io/projected/2ea20e09-7a89-4bb3-9413-ac6a647743d5-kube-api-access-x7x6t\") on node \"crc\" DevicePath \"\"" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.885575 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.885578 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-cfmfx" event={"ID":"2ea20e09-7a89-4bb3-9413-ac6a647743d5","Type":"ContainerDied","Data":"c96bf28acd52cf83134ac7d60f70f9d99693ae2900076af5c5e5e6b6e4bd6a05"} Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.885840 4675 scope.go:117] "RemoveContainer" containerID="624d2ca0254053a07dea6ae65d180d0383f2a74d415ec75092f490a0bf16f7ec" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.888872 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerStarted","Data":"b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91"} Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.889044 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.893132 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e189b411-9dd6-496f-a001-41bc90c3fe00","Type":"ContainerStarted","Data":"ac12533ccfb5e6056fb6c87c15914ef5208e4b47626f95614889cacd8b4ba640"} Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.894573 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.894598 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.936707 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-8gnzm" podStartSLOduration=4.936684138 podStartE2EDuration="4.936684138s" podCreationTimestamp="2026-01-24 07:10:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:10:51.926425751 +0000 UTC m=+1053.222531004" watchObservedRunningTime="2026-01-24 07:10:51.936684138 +0000 UTC m=+1053.232789371" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.991544 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 24 07:10:51 crc kubenswrapper[4675]: I0124 07:10:51.994310 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.010112 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.021000 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-cfmfx"] Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.037686 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.619006118 podStartE2EDuration="36.037670409s" podCreationTimestamp="2026-01-24 07:10:16 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.790361601 +0000 UTC m=+1032.086466824" lastFinishedPulling="2026-01-24 07:10:41.209025852 +0000 UTC m=+1042.505131115" observedRunningTime="2026-01-24 07:10:52.036309957 +0000 UTC m=+1053.332415180" watchObservedRunningTime="2026-01-24 07:10:52.037670409 +0000 UTC m=+1053.333775632" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.306018 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:10:52 crc kubenswrapper[4675]: E0124 07:10:52.306348 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerName="init" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.306362 4675 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerName="init" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.306528 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" containerName="init" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.341844 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350095 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6ffzp" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350315 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350507 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.350854 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.361473 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm866\" (UniqueName: \"kubernetes.io/projected/daf62505-a3ad-4c12-a520-4d412d26a71c-kube-api-access-hm866\") pod \"ovn-northd-0\" (UID: 
\"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-config\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506961 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.506997 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.507019 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-scripts\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.507077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 
07:10:52.607937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608051 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608086 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm866\" (UniqueName: \"kubernetes.io/projected/daf62505-a3ad-4c12-a520-4d412d26a71c-kube-api-access-hm866\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-config\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.608793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-scripts\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.609392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.609500 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-scripts\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.611907 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf62505-a3ad-4c12-a520-4d412d26a71c-config\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.612034 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.612181 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.612477 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf62505-a3ad-4c12-a520-4d412d26a71c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.632444 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm866\" (UniqueName: \"kubernetes.io/projected/daf62505-a3ad-4c12-a520-4d412d26a71c-kube-api-access-hm866\") pod \"ovn-northd-0\" (UID: \"daf62505-a3ad-4c12-a520-4d412d26a71c\") " pod="openstack/ovn-northd-0" Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.689543 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0"
Jan 24 07:10:52 crc kubenswrapper[4675]: I0124 07:10:52.962677 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea20e09-7a89-4bb3-9413-ac6a647743d5" path="/var/lib/kubelet/pods/2ea20e09-7a89-4bb3-9413-ac6a647743d5/volumes"
Jan 24 07:10:53 crc kubenswrapper[4675]: I0124 07:10:53.226503 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 24 07:10:53 crc kubenswrapper[4675]: W0124 07:10:53.235509 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaf62505_a3ad_4c12_a520_4d412d26a71c.slice/crio-aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80 WatchSource:0}: Error finding container aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80: Status 404 returned error can't find the container with id aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80
Jan 24 07:10:53 crc kubenswrapper[4675]: I0124 07:10:53.311912 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 24 07:10:53 crc kubenswrapper[4675]: I0124 07:10:53.913700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"daf62505-a3ad-4c12-a520-4d412d26a71c","Type":"ContainerStarted","Data":"aa2abe9eb5a704a3126060f9b44519709ef6c3797e24d190e88cf78411612b80"}
Jan 24 07:10:56 crc kubenswrapper[4675]: I0124 07:10:56.662970 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 24 07:10:56 crc kubenswrapper[4675]: I0124 07:10:56.663386 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.694843 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-8gnzm"
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.760142 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"]
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.760432 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns" containerID="cri-o://f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7" gracePeriod=10
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.885875 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.885908 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.956909 4675 generic.go:334] "Generic (PLEG): container finished" podID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerID="f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7" exitCode=0
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.956968 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerDied","Data":"f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7"}
Jan 24 07:10:57 crc kubenswrapper[4675]: I0124 07:10:57.961594 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"daf62505-a3ad-4c12-a520-4d412d26a71c","Type":"ContainerStarted","Data":"04d2e939f0ac5cefeea7239fece06b85d9d7a340dc4a96ee172bffd1ef2cd7b6"}
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.240370 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd"
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.298569 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") pod \"3186ca49-238e-418a-95e7-f857a9f3bd75\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") "
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.298666 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") pod \"3186ca49-238e-418a-95e7-f857a9f3bd75\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") "
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.298703 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") pod \"3186ca49-238e-418a-95e7-f857a9f3bd75\" (UID: \"3186ca49-238e-418a-95e7-f857a9f3bd75\") "
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.304845 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89" (OuterVolumeSpecName: "kube-api-access-msf89") pod "3186ca49-238e-418a-95e7-f857a9f3bd75" (UID: "3186ca49-238e-418a-95e7-f857a9f3bd75"). InnerVolumeSpecName "kube-api-access-msf89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.333413 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3186ca49-238e-418a-95e7-f857a9f3bd75" (UID: "3186ca49-238e-418a-95e7-f857a9f3bd75"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.333969 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config" (OuterVolumeSpecName: "config") pod "3186ca49-238e-418a-95e7-f857a9f3bd75" (UID: "3186ca49-238e-418a-95e7-f857a9f3bd75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.400804 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.400843 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3186ca49-238e-418a-95e7-f857a9f3bd75-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.400853 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msf89\" (UniqueName: \"kubernetes.io/projected/3186ca49-238e-418a-95e7-f857a9f3bd75-kube-api-access-msf89\") on node \"crc\" DevicePath \"\""
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.974777 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd" event={"ID":"3186ca49-238e-418a-95e7-f857a9f3bd75","Type":"ContainerDied","Data":"68e5c9046a1ff1a25c54ddb1f4fe8acfd229b68840f96e465631f88826708436"}
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.975118 4675 scope.go:117] "RemoveContainer" containerID="f8e3e1f6d660872a4660f41b43fb3388701507fbf656a5b98d821dd5726064e7"
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.975393 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c9krd"
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.983738 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"daf62505-a3ad-4c12-a520-4d412d26a71c","Type":"ContainerStarted","Data":"d1e77680cf5fe5ab2e2141e5c28147de413a5c8c310ece6924d2e9b488398ee2"}
Jan 24 07:10:58 crc kubenswrapper[4675]: I0124 07:10:58.984005 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.005916 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"]
Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.006475 4675 scope.go:117] "RemoveContainer" containerID="4fd0f48bc136df29146a9e239c77e392eeb5ff8cf314ea027a498f9dbf5099cb"
Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.026571 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c9krd"]
Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.033896 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.843905377 podStartE2EDuration="7.033875649s" podCreationTimestamp="2026-01-24 07:10:52 +0000 UTC" firstStartedPulling="2026-01-24 07:10:53.238193032 +0000 UTC m=+1054.534298255" lastFinishedPulling="2026-01-24 07:10:57.428163304 +0000 UTC m=+1058.724268527" observedRunningTime="2026-01-24 07:10:59.020827165 +0000 UTC m=+1060.316932398" watchObservedRunningTime="2026-01-24 07:10:59.033875649 +0000 UTC m=+1060.329980872"
Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.428000 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 24 07:10:59 crc kubenswrapper[4675]: I0124 07:10:59.525284 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.287868 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"]
Jan 24 07:11:00 crc kubenswrapper[4675]: E0124 07:11:00.288579 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.288596 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns"
Jan 24 07:11:00 crc kubenswrapper[4675]: E0124 07:11:00.288619 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="init"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.288627 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="init"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.293032 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" containerName="dnsmasq-dns"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.294197 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.318705 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"]
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336458 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336524 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336633 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.336683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437689 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437813 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437852 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.437879 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.438668 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.439020 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.442267 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.442889 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.471761 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"dnsmasq-dns-b8fbc5445-mtp78\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") " pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.639516 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:00 crc kubenswrapper[4675]: I0124 07:11:00.950371 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3186ca49-238e-418a-95e7-f857a9f3bd75" path="/var/lib/kubelet/pods/3186ca49-238e-418a-95e7-f857a9f3bd75/volumes"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.076156 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"]
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.544930 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.551364 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.551389 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.554262 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.555518 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.556857 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.557587 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sspq9"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745346 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-lock\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745404 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7tp\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-kube-api-access-fn7tp\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745438 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745550 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-cache\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745690 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.745803 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53054f-7616-43d6-9aeb-eb5f880b6e40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.846886 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-cache\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.846956 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.846991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53054f-7616-43d6-9aeb-eb5f880b6e40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847048 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-lock\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847078 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7tp\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-kube-api-access-fn7tp\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847108 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: E0124 07:11:01.847216 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 24 07:11:01 crc kubenswrapper[4675]: E0124 07:11:01.847228 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 24 07:11:01 crc kubenswrapper[4675]: E0124 07:11:01.847266 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:02.34724994 +0000 UTC m=+1063.643355153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847343 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847362 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-cache\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.847701 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cf53054f-7616-43d6-9aeb-eb5f880b6e40-lock\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.852568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf53054f-7616-43d6-9aeb-eb5f880b6e40-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.867831 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:01 crc kubenswrapper[4675]: I0124 07:11:01.875795 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7tp\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-kube-api-access-fn7tp\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.013700 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1e65888-5032-411e-8910-5438e0aff32f" containerID="ae8e22c487bc5bca69369f08e9cf6514b43a32b610d114fdbb4d48fac338177d" exitCode=0
Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.013807 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerDied","Data":"ae8e22c487bc5bca69369f08e9cf6514b43a32b610d114fdbb4d48fac338177d"}
Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.013838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerStarted","Data":"e4e440f949a16c7c92a1572ceb6020eb2c0abbdd347846f7e3ad225704016290"}
Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.034536 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.142089 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 24 07:11:02 crc kubenswrapper[4675]: I0124 07:11:02.359084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:02 crc kubenswrapper[4675]: E0124 07:11:02.359276 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 24 07:11:02 crc kubenswrapper[4675]: E0124 07:11:02.359292 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 24 07:11:02 crc kubenswrapper[4675]: E0124 07:11:02.359344 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:03.359325733 +0000 UTC m=+1064.655430956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.023081 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerStarted","Data":"c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41"}
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.044224 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podStartSLOduration=3.044205557 podStartE2EDuration="3.044205557s" podCreationTimestamp="2026-01-24 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:03.040981649 +0000 UTC m=+1064.337086882" watchObservedRunningTime="2026-01-24 07:11:03.044205557 +0000 UTC m=+1064.340310781"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.377520 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0"
Jan 24 07:11:03 crc kubenswrapper[4675]: E0124 07:11:03.377663 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 24 07:11:03 crc kubenswrapper[4675]: E0124 07:11:03.377690 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 24 07:11:03 crc kubenswrapper[4675]: E0124 07:11:03.377759 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:05.377741105 +0000 UTC m=+1066.673846328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.572093 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"]
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.573092 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.579438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.584809 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gqpfm"]
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.597373 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.628493 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"]
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.635641 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gqpfm"]
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.684381 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.684498 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785599 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785667 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785700 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.785759 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.786571 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.804743 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"glance-e5bb-account-create-update-r9xsl\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.886827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.886879 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.887545 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.900576 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.903878 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"glance-db-create-gqpfm\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:03 crc kubenswrapper[4675]: I0124 07:11:03.920034 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm"
Jan 24 07:11:04 crc kubenswrapper[4675]: I0124 07:11:04.037789 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:11:04 crc kubenswrapper[4675]: I0124 07:11:04.522100 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"]
Jan 24 07:11:04 crc kubenswrapper[4675]: I0124 07:11:04.619710 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gqpfm"]
Jan 24 07:11:04 crc kubenswrapper[4675]: W0124 07:11:04.638117 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147543ec_f687_430c_8a42_547c5861dbf4.slice/crio-e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07 WatchSource:0}: Error finding container e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07: Status 404 returned error can't find the container with id e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07
Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.047481 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl"
event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerStarted","Data":"9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.047865 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl" event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerStarted","Data":"cc90ef57ed2f268af367d04ae88b666b28bec4ef07715447864520bd64348a57"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.049648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerStarted","Data":"974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.049712 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerStarted","Data":"e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07"} Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.080064 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e5bb-account-create-update-r9xsl" podStartSLOduration=2.080045175 podStartE2EDuration="2.080045175s" podCreationTimestamp="2026-01-24 07:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:05.067513103 +0000 UTC m=+1066.363618326" watchObservedRunningTime="2026-01-24 07:11:05.080045175 +0000 UTC m=+1066.376150398" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.084446 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-gqpfm" podStartSLOduration=2.08443359 podStartE2EDuration="2.08443359s" podCreationTimestamp="2026-01-24 07:11:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:05.078438115 +0000 UTC m=+1066.374543338" watchObservedRunningTime="2026-01-24 07:11:05.08443359 +0000 UTC m=+1066.380538813" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.304163 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.305398 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.310957 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.313931 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.412477 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.414321 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.416192 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.417678 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.421829 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.421869 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.421925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.422375 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.422397 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 
07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.422439 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:09.422424364 +0000 UTC m=+1070.718529587 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.425846 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.462568 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sz46b"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.463578 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.470519 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:05 crc kubenswrapper[4675]: E0124 07:11:05.471174 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-w4vn4 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-zjwxg" podUID="30abf472-a311-44dd-9853-cace1a1c41a9" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.480038 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sz46b"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526490 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526546 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " 
pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526655 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526756 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526792 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526820 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526842 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"swift-ring-rebalance-zjwxg\" (UID: 
\"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.526867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.527830 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.549421 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"root-account-create-update-wmshq\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.559428 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.622298 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628222 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628276 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628321 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628385 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"swift-ring-rebalance-sz46b\" (UID: 
\"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628418 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628442 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628462 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628482 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628501 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: 
\"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628523 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628613 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628669 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.628735 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.629459 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" 
Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.629644 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.630919 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.633096 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.635309 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.637933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.656917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"swift-ring-rebalance-zjwxg\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730583 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730673 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730700 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730749 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730833 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.730884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.731662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.732649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.733177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod 
\"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.746426 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.747327 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.747440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.748271 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"swift-ring-rebalance-sz46b\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:05 crc kubenswrapper[4675]: I0124 07:11:05.785476 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.061360 4675 generic.go:334] "Generic (PLEG): container finished" podID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerID="9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df" exitCode=0 Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.061417 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl" event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerDied","Data":"9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df"} Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.063810 4675 generic.go:334] "Generic (PLEG): container finished" podID="147543ec-f687-430c-8a42-547c5861dbf4" containerID="974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2" exitCode=0 Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.063871 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.063899 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerDied","Data":"974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2"} Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.081173 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.093850 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239759 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239858 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239894 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239937 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.239969 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: 
\"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.240016 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.240074 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") pod \"30abf472-a311-44dd-9853-cace1a1c41a9\" (UID: \"30abf472-a311-44dd-9853-cace1a1c41a9\") " Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.240763 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts" (OuterVolumeSpecName: "scripts") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.241581 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.242448 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.245082 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4" (OuterVolumeSpecName: "kube-api-access-w4vn4") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "kube-api-access-w4vn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.246700 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.247840 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.255337 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30abf472-a311-44dd-9853-cace1a1c41a9" (UID: "30abf472-a311-44dd-9853-cace1a1c41a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.260529 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sz46b"] Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341570 4675 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341615 4675 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341627 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4vn4\" (UniqueName: \"kubernetes.io/projected/30abf472-a311-44dd-9853-cace1a1c41a9-kube-api-access-w4vn4\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341641 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341652 4675 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30abf472-a311-44dd-9853-cace1a1c41a9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341662 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30abf472-a311-44dd-9853-cace1a1c41a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:06 crc kubenswrapper[4675]: I0124 07:11:06.341672 4675 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/30abf472-a311-44dd-9853-cace1a1c41a9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.077436 4675 generic.go:334] "Generic (PLEG): container finished" podID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerID="cf93369f45b95439f48ef44ae1c4d7acc85ac8a88c7301daa8df8a93d1811848" exitCode=0 Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.077629 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wmshq" event={"ID":"7cb298f5-b4c8-42df-8a3f-c1458b89e443","Type":"ContainerDied","Data":"cf93369f45b95439f48ef44ae1c4d7acc85ac8a88c7301daa8df8a93d1811848"} Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.077839 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wmshq" event={"ID":"7cb298f5-b4c8-42df-8a3f-c1458b89e443","Type":"ContainerStarted","Data":"89bf972e71483a180f3e39a4363040b61fbdded1daff7ef3862df0951bf7d9c1"} Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.081158 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerStarted","Data":"ebac677078572d2fe1c4d4efa213085362240fb07f0ab2327d75b7ba1eb6c2d8"} Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.081220 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zjwxg" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.136374 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.142087 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-zjwxg"] Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.661020 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.668347 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.762821 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.765849 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") pod \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.765896 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") pod \"147543ec-f687-430c-8a42-547c5861dbf4\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.765955 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") pod \"147543ec-f687-430c-8a42-547c5861dbf4\" (UID: \"147543ec-f687-430c-8a42-547c5861dbf4\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.766014 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") pod \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\" (UID: \"ade78eac-6799-49f4-b0ea-2f3dcb21273e\") " Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.766632 4675 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "147543ec-f687-430c-8a42-547c5861dbf4" (UID: "147543ec-f687-430c-8a42-547c5861dbf4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.766678 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ade78eac-6799-49f4-b0ea-2f3dcb21273e" (UID: "ade78eac-6799-49f4-b0ea-2f3dcb21273e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.787289 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2" (OuterVolumeSpecName: "kube-api-access-bhpf2") pod "ade78eac-6799-49f4-b0ea-2f3dcb21273e" (UID: "ade78eac-6799-49f4-b0ea-2f3dcb21273e"). InnerVolumeSpecName "kube-api-access-bhpf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.795178 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt" (OuterVolumeSpecName: "kube-api-access-czznt") pod "147543ec-f687-430c-8a42-547c5861dbf4" (UID: "147543ec-f687-430c-8a42-547c5861dbf4"). InnerVolumeSpecName "kube-api-access-czznt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868343 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhpf2\" (UniqueName: \"kubernetes.io/projected/ade78eac-6799-49f4-b0ea-2f3dcb21273e-kube-api-access-bhpf2\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868384 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czznt\" (UniqueName: \"kubernetes.io/projected/147543ec-f687-430c-8a42-547c5861dbf4-kube-api-access-czznt\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868394 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147543ec-f687-430c-8a42-547c5861dbf4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.868403 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade78eac-6799-49f4-b0ea-2f3dcb21273e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.925729 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:11:07 crc kubenswrapper[4675]: E0124 07:11:07.926167 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147543ec-f687-430c-8a42-547c5861dbf4" containerName="mariadb-database-create" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.926182 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="147543ec-f687-430c-8a42-547c5861dbf4" containerName="mariadb-database-create" Jan 24 07:11:07 crc kubenswrapper[4675]: E0124 07:11:07.926202 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerName="mariadb-account-create-update" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 
07:11:07.926210 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerName="mariadb-account-create-update" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.930086 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" containerName="mariadb-account-create-update" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.930124 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="147543ec-f687-430c-8a42-547c5861dbf4" containerName="mariadb-database-create" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.936392 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:07 crc kubenswrapper[4675]: I0124 07:11:07.964239 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.021265 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.022178 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.023812 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.043745 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.071378 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.071899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.098564 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e5bb-account-create-update-r9xsl" event={"ID":"ade78eac-6799-49f4-b0ea-2f3dcb21273e","Type":"ContainerDied","Data":"cc90ef57ed2f268af367d04ae88b666b28bec4ef07715447864520bd64348a57"} Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.098599 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc90ef57ed2f268af367d04ae88b666b28bec4ef07715447864520bd64348a57" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.099185 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e5bb-account-create-update-r9xsl" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.103741 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gqpfm" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.103811 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gqpfm" event={"ID":"147543ec-f687-430c-8a42-547c5861dbf4","Type":"ContainerDied","Data":"e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07"} Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.103837 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e974e7137e990a8ee4f0ea9197e564147f42d5e1ef614ef9871d8fa4811b5b07" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173073 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173160 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173192 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"keystone-1e77-account-create-update-7b985\" (UID: 
\"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173223 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.173937 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.213983 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"keystone-db-create-s7r45\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.232033 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.235008 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.253647 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.259183 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.275094 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.275142 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.277382 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.296417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"keystone-1e77-account-create-update-7b985\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.341737 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.361762 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.364262 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.368385 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.376242 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.376354 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.384852 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477399 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") 
" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477469 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.477566 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.480880 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"placement-db-create-zh8n7\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.507621 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"placement-db-create-zh8n7\" (UID: 
\"45f87016-197d-4a38-94d7-4c7828af8ee3\") " pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.569090 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.579038 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.579185 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.580594 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.599308 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"placement-1ef3-account-create-update-txcmj\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: 
I0124 07:11:08.685509 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:08 crc kubenswrapper[4675]: I0124 07:11:08.958920 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30abf472-a311-44dd-9853-cace1a1c41a9" path="/var/lib/kubelet/pods/30abf472-a311-44dd-9853-cace1a1c41a9/volumes" Jan 24 07:11:09 crc kubenswrapper[4675]: I0124 07:11:09.493304 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:09 crc kubenswrapper[4675]: E0124 07:11:09.493548 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:09 crc kubenswrapper[4675]: E0124 07:11:09.493561 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 07:11:09 crc kubenswrapper[4675]: E0124 07:11:09.493603 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:17.493586787 +0000 UTC m=+1078.789692010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.030859 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.133492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wmshq" event={"ID":"7cb298f5-b4c8-42df-8a3f-c1458b89e443","Type":"ContainerDied","Data":"89bf972e71483a180f3e39a4363040b61fbdded1daff7ef3862df0951bf7d9c1"} Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.133532 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89bf972e71483a180f3e39a4363040b61fbdded1daff7ef3862df0951bf7d9c1" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.133599 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wmshq" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.204884 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") pod \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.205015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") pod \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\" (UID: \"7cb298f5-b4c8-42df-8a3f-c1458b89e443\") " Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.206513 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cb298f5-b4c8-42df-8a3f-c1458b89e443" (UID: "7cb298f5-b4c8-42df-8a3f-c1458b89e443"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.226360 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86" (OuterVolumeSpecName: "kube-api-access-jkb86") pod "7cb298f5-b4c8-42df-8a3f-c1458b89e443" (UID: "7cb298f5-b4c8-42df-8a3f-c1458b89e443"). InnerVolumeSpecName "kube-api-access-jkb86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.307274 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkb86\" (UniqueName: \"kubernetes.io/projected/7cb298f5-b4c8-42df-8a3f-c1458b89e443-kube-api-access-jkb86\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.307308 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb298f5-b4c8-42df-8a3f-c1458b89e443-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.642441 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.699378 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"] Jan 24 07:11:10 crc kubenswrapper[4675]: I0124 07:11:10.699593 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-8gnzm" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns" containerID="cri-o://b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91" gracePeriod=10 Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.146195 4675 generic.go:334] "Generic (PLEG): container finished" podID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" 
containerID="b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91" exitCode=0 Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.146210 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerDied","Data":"b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91"} Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.553088 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.565381 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wmshq"] Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.815975 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:11:11 crc kubenswrapper[4675]: W0124 07:11:11.906023 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66b11fd_5bd9_4ba0_bd60_b370a709be63.slice/crio-25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe WatchSource:0}: Error finding container 25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe: Status 404 returned error can't find the container with id 25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.907908 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935763 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " Jan 24 
07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935858 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935887 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935952 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.935999 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") pod \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\" (UID: \"d380ff5f-2ad7-495e-8cd4-2df178c2cd02\") " Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.944898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5" (OuterVolumeSpecName: "kube-api-access-2sgk5") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "kube-api-access-2sgk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.979853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.984756 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.987565 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:11 crc kubenswrapper[4675]: I0124 07:11:11.991423 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config" (OuterVolumeSpecName: "config") pod "d380ff5f-2ad7-495e-8cd4-2df178c2cd02" (UID: "d380ff5f-2ad7-495e-8cd4-2df178c2cd02"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038234 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038469 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038486 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sgk5\" (UniqueName: \"kubernetes.io/projected/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-kube-api-access-2sgk5\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038495 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.038504 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d380ff5f-2ad7-495e-8cd4-2df178c2cd02-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.055451 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.065758 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:11:12 crc kubenswrapper[4675]: W0124 07:11:12.082071 4675 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b2533f_cb15_4581_84c1_81235b34bfe5.slice/crio-9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d WatchSource:0}: Error finding container 9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d: Status 404 returned error can't find the container with id 9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.103209 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:11:12 crc kubenswrapper[4675]: W0124 07:11:12.107689 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45f87016_197d_4a38_94d7_4c7828af8ee3.slice/crio-433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1 WatchSource:0}: Error finding container 433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1: Status 404 returned error can't find the container with id 433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1 Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.165577 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerStarted","Data":"1bfb5aaca42be58bec27fbd4186467ef6026e0f9acdf2a15909cb65d6b4b387c"} Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.172191 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e77-account-create-update-7b985" event={"ID":"5b33dcb3-da61-44f3-9666-2b4afb90b9cd","Type":"ContainerStarted","Data":"47e09f6052ac6920e57c50185bbdc16b1a1efdb54f1858d643790403835da9ca"} Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.173868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" 
event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerStarted","Data":"76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30"} Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.173898 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerStarted","Data":"25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe"} Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.175572 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zh8n7" event={"ID":"45f87016-197d-4a38-94d7-4c7828af8ee3","Type":"ContainerStarted","Data":"433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1"} Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.176669 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7r45" event={"ID":"33b2533f-cb15-4581-84c1-81235b34bfe5","Type":"ContainerStarted","Data":"9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d"} Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.185609 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sz46b" podStartSLOduration=2.02857011 podStartE2EDuration="7.185594336s" podCreationTimestamp="2026-01-24 07:11:05 +0000 UTC" firstStartedPulling="2026-01-24 07:11:06.267518364 +0000 UTC m=+1067.563623587" lastFinishedPulling="2026-01-24 07:11:11.42454259 +0000 UTC m=+1072.720647813" observedRunningTime="2026-01-24 07:11:12.178614458 +0000 UTC m=+1073.474719681" watchObservedRunningTime="2026-01-24 07:11:12.185594336 +0000 UTC m=+1073.481699559" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.187533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-8gnzm" 
event={"ID":"d380ff5f-2ad7-495e-8cd4-2df178c2cd02","Type":"ContainerDied","Data":"21e3c05cf504ff346743ae080d530a42394925a2cfab5c9696da02df66a7400b"} Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.187571 4675 scope.go:117] "RemoveContainer" containerID="b564f9e16029d5ca253f041ec1a749aa789ea08cb8ab0c837db92a65d8468a91" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.187703 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-8gnzm" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.201222 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-1ef3-account-create-update-txcmj" podStartSLOduration=4.201209523 podStartE2EDuration="4.201209523s" podCreationTimestamp="2026-01-24 07:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:12.198454086 +0000 UTC m=+1073.494559309" watchObservedRunningTime="2026-01-24 07:11:12.201209523 +0000 UTC m=+1073.497314736" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.215148 4675 scope.go:117] "RemoveContainer" containerID="09845de0d6cfe71d82646bcb54bb06d9aef2cb9d4719b6cc5abade59a5012412" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.232532 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"] Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.241426 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-8gnzm"] Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.955247 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" path="/var/lib/kubelet/pods/7cb298f5-b4c8-42df-8a3f-c1458b89e443/volumes" Jan 24 07:11:12 crc kubenswrapper[4675]: I0124 07:11:12.956702 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" path="/var/lib/kubelet/pods/d380ff5f-2ad7-495e-8cd4-2df178c2cd02/volumes" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.201326 4675 generic.go:334] "Generic (PLEG): container finished" podID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerID="cb5f5de19b4ad05d5cb260b67a7ffda59880a5be1b09d0c5d743d36c1be22ba3" exitCode=0 Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.201443 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7r45" event={"ID":"33b2533f-cb15-4581-84c1-81235b34bfe5","Type":"ContainerDied","Data":"cb5f5de19b4ad05d5cb260b67a7ffda59880a5be1b09d0c5d743d36c1be22ba3"} Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.205914 4675 generic.go:334] "Generic (PLEG): container finished" podID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerID="f1cbd2804e3c921d0862ddd3c3e25da9a0eb08d8f218d2fcc9340af63efc5b69" exitCode=0 Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.205997 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e77-account-create-update-7b985" event={"ID":"5b33dcb3-da61-44f3-9666-2b4afb90b9cd","Type":"ContainerDied","Data":"f1cbd2804e3c921d0862ddd3c3e25da9a0eb08d8f218d2fcc9340af63efc5b69"} Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.207956 4675 generic.go:334] "Generic (PLEG): container finished" podID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerID="76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30" exitCode=0 Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.207990 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerDied","Data":"76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30"} Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.209788 4675 generic.go:334] "Generic (PLEG): container finished" 
podID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerID="2410d88c73d46b104c7a96605edcd69c1a2ae6d7410fac2b2340c43785d9bc0e" exitCode=0 Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.209868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zh8n7" event={"ID":"45f87016-197d-4a38-94d7-4c7828af8ee3","Type":"ContainerDied","Data":"2410d88c73d46b104c7a96605edcd69c1a2ae6d7410fac2b2340c43785d9bc0e"} Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644054 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-95xkb"] Jan 24 07:11:13 crc kubenswrapper[4675]: E0124 07:11:13.644452 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerName="mariadb-account-create-update" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644468 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerName="mariadb-account-create-update" Jan 24 07:11:13 crc kubenswrapper[4675]: E0124 07:11:13.644478 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644485 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns" Jan 24 07:11:13 crc kubenswrapper[4675]: E0124 07:11:13.644501 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="init" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644507 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="init" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644662 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d380ff5f-2ad7-495e-8cd4-2df178c2cd02" containerName="dnsmasq-dns" Jan 24 
07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.644674 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb298f5-b4c8-42df-8a3f-c1458b89e443" containerName="mariadb-account-create-update" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.645158 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.646872 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.647097 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wwtw8" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.667748 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-95xkb"] Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.772893 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.773030 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.773087 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"glance-db-sync-95xkb\" (UID: 
\"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.773152 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.874985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.875048 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.875101 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.875177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc 
kubenswrapper[4675]: I0124 07:11:13.880673 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.881791 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.886599 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.893918 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"glance-db-sync-95xkb\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") " pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:13 crc kubenswrapper[4675]: I0124 07:11:13.968997 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.510189 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.608940 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-95xkb"] Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.689478 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") pod \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.689559 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") pod \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\" (UID: \"f66b11fd-5bd9-4ba0-bd60-b370a709be63\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.690712 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f66b11fd-5bd9-4ba0-bd60-b370a709be63" (UID: "f66b11fd-5bd9-4ba0-bd60-b370a709be63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.701423 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg" (OuterVolumeSpecName: "kube-api-access-5pxhg") pod "f66b11fd-5bd9-4ba0-bd60-b370a709be63" (UID: "f66b11fd-5bd9-4ba0-bd60-b370a709be63"). InnerVolumeSpecName "kube-api-access-5pxhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.703079 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.714642 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.791860 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") pod \"45f87016-197d-4a38-94d7-4c7828af8ee3\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.791944 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") pod \"45f87016-197d-4a38-94d7-4c7828af8ee3\" (UID: \"45f87016-197d-4a38-94d7-4c7828af8ee3\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.792321 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f66b11fd-5bd9-4ba0-bd60-b370a709be63-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.792332 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pxhg\" (UniqueName: \"kubernetes.io/projected/f66b11fd-5bd9-4ba0-bd60-b370a709be63-kube-api-access-5pxhg\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.793025 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45f87016-197d-4a38-94d7-4c7828af8ee3" (UID: "45f87016-197d-4a38-94d7-4c7828af8ee3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.795920 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx" (OuterVolumeSpecName: "kube-api-access-l2sfx") pod "45f87016-197d-4a38-94d7-4c7828af8ee3" (UID: "45f87016-197d-4a38-94d7-4c7828af8ee3"). InnerVolumeSpecName "kube-api-access-l2sfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.813111 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892864 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") pod \"33b2533f-cb15-4581-84c1-81235b34bfe5\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892921 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") pod \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892944 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") pod \"33b2533f-cb15-4581-84c1-81235b34bfe5\" (UID: \"33b2533f-cb15-4581-84c1-81235b34bfe5\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.892990 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhn6r\" (UniqueName: 
\"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") pod \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\" (UID: \"5b33dcb3-da61-44f3-9666-2b4afb90b9cd\") " Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.893300 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45f87016-197d-4a38-94d7-4c7828af8ee3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.893316 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2sfx\" (UniqueName: \"kubernetes.io/projected/45f87016-197d-4a38-94d7-4c7828af8ee3-kube-api-access-l2sfx\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.893984 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b33dcb3-da61-44f3-9666-2b4afb90b9cd" (UID: "5b33dcb3-da61-44f3-9666-2b4afb90b9cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.894387 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33b2533f-cb15-4581-84c1-81235b34bfe5" (UID: "33b2533f-cb15-4581-84c1-81235b34bfe5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.902161 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r" (OuterVolumeSpecName: "kube-api-access-fhn6r") pod "5b33dcb3-da61-44f3-9666-2b4afb90b9cd" (UID: "5b33dcb3-da61-44f3-9666-2b4afb90b9cd"). 
InnerVolumeSpecName "kube-api-access-fhn6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:14 crc kubenswrapper[4675]: I0124 07:11:14.902693 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp" (OuterVolumeSpecName: "kube-api-access-6mchp") pod "33b2533f-cb15-4581-84c1-81235b34bfe5" (UID: "33b2533f-cb15-4581-84c1-81235b34bfe5"). InnerVolumeSpecName "kube-api-access-6mchp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046340 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b2533f-cb15-4581-84c1-81235b34bfe5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046402 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046413 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mchp\" (UniqueName: \"kubernetes.io/projected/33b2533f-cb15-4581-84c1-81235b34bfe5-kube-api-access-6mchp\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.046428 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhn6r\" (UniqueName: \"kubernetes.io/projected/5b33dcb3-da61-44f3-9666-2b4afb90b9cd-kube-api-access-fhn6r\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.286609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerStarted","Data":"aba894555e46ec22e81cf8b996b4a30e472efbd5119d3ffa6f69ee65a8d156ee"} Jan 24 07:11:15 crc 
kubenswrapper[4675]: I0124 07:11:15.288298 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s7r45" event={"ID":"33b2533f-cb15-4581-84c1-81235b34bfe5","Type":"ContainerDied","Data":"9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d"} Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.288322 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a715cf6c14d67b86c4fde2dc3a0bbca99678dbef210c14c8b94f5f898b1c93d" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.288440 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s7r45" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.289546 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e77-account-create-update-7b985" event={"ID":"5b33dcb3-da61-44f3-9666-2b4afb90b9cd","Type":"ContainerDied","Data":"47e09f6052ac6920e57c50185bbdc16b1a1efdb54f1858d643790403835da9ca"} Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.289578 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e09f6052ac6920e57c50185bbdc16b1a1efdb54f1858d643790403835da9ca" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.289640 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e77-account-create-update-7b985" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.295198 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ef3-account-create-update-txcmj" event={"ID":"f66b11fd-5bd9-4ba0-bd60-b370a709be63","Type":"ContainerDied","Data":"25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe"} Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.295223 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d9a2ece04b3f2bc31986afc779539391ad52f7e854e2010c215d9f249d8bbe" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.295269 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ef3-account-create-update-txcmj" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.297524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zh8n7" event={"ID":"45f87016-197d-4a38-94d7-4c7828af8ee3","Type":"ContainerDied","Data":"433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1"} Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.297544 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433bd184e908b3bc8287e83e9be377f9f2ff0b347e8b23d98c413ad90c9db8a1" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.297583 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zh8n7" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321547 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321818 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321834 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321856 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321863 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321871 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321878 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: E0124 07:11:15.321890 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.321895 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322032 4675 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322044 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322051 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" containerName="mariadb-database-create" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322059 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" containerName="mariadb-account-create-update" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.322515 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.328138 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.342886 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.351794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.351857 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.452852 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.452973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.455060 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.468979 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"root-account-create-update-h2rs4\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:15 crc kubenswrapper[4675]: I0124 07:11:15.642023 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.085860 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:16 crc kubenswrapper[4675]: W0124 07:11:16.095760 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb WatchSource:0}: Error finding container 1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb: Status 404 returned error can't find the container with id 1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.306962 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerStarted","Data":"d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b"} Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.307301 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerStarted","Data":"1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb"} Jan 24 07:11:16 crc kubenswrapper[4675]: I0124 07:11:16.323328 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-h2rs4" podStartSLOduration=1.323312311 podStartE2EDuration="1.323312311s" podCreationTimestamp="2026-01-24 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:16.321969169 +0000 UTC m=+1077.618074392" watchObservedRunningTime="2026-01-24 07:11:16.323312311 +0000 UTC m=+1077.619417524" Jan 
24 07:11:17 crc kubenswrapper[4675]: I0124 07:11:17.316176 4675 generic.go:334] "Generic (PLEG): container finished" podID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerID="d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b" exitCode=0 Jan 24 07:11:17 crc kubenswrapper[4675]: I0124 07:11:17.316218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerDied","Data":"d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b"} Jan 24 07:11:17 crc kubenswrapper[4675]: I0124 07:11:17.586277 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:17 crc kubenswrapper[4675]: E0124 07:11:17.586468 4675 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 24 07:11:17 crc kubenswrapper[4675]: E0124 07:11:17.586498 4675 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 24 07:11:17 crc kubenswrapper[4675]: E0124 07:11:17.586549 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift podName:cf53054f-7616-43d6-9aeb-eb5f880b6e40 nodeName:}" failed. No retries permitted until 2026-01-24 07:11:33.586531743 +0000 UTC m=+1094.882636966 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift") pod "swift-storage-0" (UID: "cf53054f-7616-43d6-9aeb-eb5f880b6e40") : configmap "swift-ring-files" not found Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.336105 4675 generic.go:334] "Generic (PLEG): container finished" podID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerID="78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761" exitCode=0 Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.336181 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerDied","Data":"78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761"} Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.340190 4675 generic.go:334] "Generic (PLEG): container finished" podID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" exitCode=0 Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.340343 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerDied","Data":"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a"} Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.614024 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2x2kb" podUID="b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1" containerName="ovn-controller" probeResult="failure" output=< Jan 24 07:11:18 crc kubenswrapper[4675]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 07:11:18 crc kubenswrapper[4675]: > Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.663235 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 
07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.698747 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.803945 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") pod \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.804071 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") pod \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\" (UID: \"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad\") " Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.804648 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" (UID: "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.809485 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q" (OuterVolumeSpecName: "kube-api-access-nzf4q") pod "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" (UID: "42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad"). InnerVolumeSpecName "kube-api-access-nzf4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.905228 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:18 crc kubenswrapper[4675]: I0124 07:11:18.905268 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzf4q\" (UniqueName: \"kubernetes.io/projected/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad-kube-api-access-nzf4q\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.360854 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerStarted","Data":"8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668"} Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.361086 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.373893 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerStarted","Data":"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075"} Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.374130 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.376302 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2rs4" event={"ID":"42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad","Type":"ContainerDied","Data":"1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb"} Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.376368 4675 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.376326 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2rs4" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.435206 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=56.135828106 podStartE2EDuration="1m6.435187326s" podCreationTimestamp="2026-01-24 07:10:13 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.144107867 +0000 UTC m=+1031.440213090" lastFinishedPulling="2026-01-24 07:10:40.443467067 +0000 UTC m=+1041.739572310" observedRunningTime="2026-01-24 07:11:19.431171609 +0000 UTC m=+1080.727276832" watchObservedRunningTime="2026-01-24 07:11:19.435187326 +0000 UTC m=+1080.731292549" Jan 24 07:11:19 crc kubenswrapper[4675]: I0124 07:11:19.439274 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.93360354 podStartE2EDuration="1m5.439258184s" podCreationTimestamp="2026-01-24 07:10:14 +0000 UTC" firstStartedPulling="2026-01-24 07:10:30.705627582 +0000 UTC m=+1032.001732805" lastFinishedPulling="2026-01-24 07:10:41.211282196 +0000 UTC m=+1042.507387449" observedRunningTime="2026-01-24 07:11:19.399306373 +0000 UTC m=+1080.695411616" watchObservedRunningTime="2026-01-24 07:11:19.439258184 +0000 UTC m=+1080.735363407" Jan 24 07:11:20 crc kubenswrapper[4675]: I0124 07:11:20.405460 4675 generic.go:334] "Generic (PLEG): container finished" podID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerID="1bfb5aaca42be58bec27fbd4186467ef6026e0f9acdf2a15909cb65d6b4b387c" exitCode=0 Jan 24 07:11:20 crc kubenswrapper[4675]: I0124 07:11:20.405520 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" 
event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerDied","Data":"1bfb5aaca42be58bec27fbd4186467ef6026e0f9acdf2a15909cb65d6b4b387c"} Jan 24 07:11:21 crc kubenswrapper[4675]: I0124 07:11:21.569312 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:21 crc kubenswrapper[4675]: I0124 07:11:21.585886 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h2rs4"] Jan 24 07:11:22 crc kubenswrapper[4675]: I0124 07:11:22.959830 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" path="/var/lib/kubelet/pods/42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad/volumes" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.578020 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2x2kb" podUID="b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1" containerName="ovn-controller" probeResult="failure" output=< Jan 24 07:11:23 crc kubenswrapper[4675]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 07:11:23 crc kubenswrapper[4675]: > Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.610781 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fsln2" Jan 24 07:11:23 crc kubenswrapper[4675]: E0124 07:11:23.823032 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.851510 4675 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:23 crc kubenswrapper[4675]: E0124 07:11:23.851872 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerName="mariadb-account-create-update" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.851889 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerName="mariadb-account-create-update" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.852121 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="42600dc3-a4f8-45bf-9bdf-4fb8a52f6aad" containerName="mariadb-account-create-update" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.852706 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.857335 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.859208 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990345 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990442 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: 
\"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990467 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990659 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:23 crc kubenswrapper[4675]: I0124 07:11:23.990704 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092235 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092612 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092628 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.092692 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.093063 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.094473 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.145690 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"ovn-controller-2x2kb-config-9whd8\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:24 crc kubenswrapper[4675]: I0124 07:11:24.175371 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.588894 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.590281 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.595015 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.616306 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.732543 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.732608 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.833906 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.833988 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"root-account-create-update-8k7rv\" (UID: 
\"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.834844 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.855668 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"root-account-create-update-8k7rv\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:26 crc kubenswrapper[4675]: I0124 07:11:26.913777 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:28 crc kubenswrapper[4675]: I0124 07:11:28.582585 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2x2kb" podUID="b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1" containerName="ovn-controller" probeResult="failure" output=< Jan 24 07:11:28 crc kubenswrapper[4675]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 24 07:11:28 crc kubenswrapper[4675]: > Jan 24 07:11:31 crc kubenswrapper[4675]: E0124 07:11:31.674755 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 24 07:11:31 crc kubenswrapper[4675]: E0124 07:11:31.675248 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6jbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-95xkb_openstack(7e53c5a1-6293-46d9-9783-e7d183050152): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 24 07:11:31 crc kubenswrapper[4675]: E0124 07:11:31.676422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-95xkb" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.788510 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918345 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918435 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918459 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918523 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc 
kubenswrapper[4675]: I0124 07:11:31.918549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918575 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.918652 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") pod \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\" (UID: \"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8\") " Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.920316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.920523 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.939259 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.951004 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7" (OuterVolumeSpecName: "kube-api-access-nzzv7") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "kube-api-access-nzzv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:31 crc kubenswrapper[4675]: I0124 07:11:31.974075 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts" (OuterVolumeSpecName: "scripts") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.012860 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.017266 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" (UID: "57da3a87-eeeb-47c8-b1bd-6a160dd81ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020286 4675 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020306 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020320 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzv7\" (UniqueName: \"kubernetes.io/projected/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-kube-api-access-nzzv7\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020330 4675 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020340 4675 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020350 4675 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.020361 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57da3a87-eeeb-47c8-b1bd-6a160dd81ff8-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.191362 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"] Jan 24 07:11:32 crc kubenswrapper[4675]: W0124 07:11:32.196958 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099e300a_0fba_4964_9d0b_34124522c8f3.slice/crio-2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122 WatchSource:0}: Error finding container 2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122: Status 404 returned error can't find the container with id 2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122 Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.251098 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:11:32 crc kubenswrapper[4675]: W0124 07:11:32.255343 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c46b58_28e2_4896_8ae5_dc53cbe96ec9.slice/crio-3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717 WatchSource:0}: Error finding container 3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717: Status 404 returned error can't find the container with id 3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717 Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.507673 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" 
event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerStarted","Data":"1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.508750 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerStarted","Data":"3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.510115 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sz46b" event={"ID":"57da3a87-eeeb-47c8-b1bd-6a160dd81ff8","Type":"ContainerDied","Data":"ebac677078572d2fe1c4d4efa213085362240fb07f0ab2327d75b7ba1eb6c2d8"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.510157 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebac677078572d2fe1c4d4efa213085362240fb07f0ab2327d75b7ba1eb6c2d8" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.510159 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sz46b" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.511856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb-config-9whd8" event={"ID":"099e300a-0fba-4964-9d0b-34124522c8f3","Type":"ContainerStarted","Data":"874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a"} Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.511902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb-config-9whd8" event={"ID":"099e300a-0fba-4964-9d0b-34124522c8f3","Type":"ContainerStarted","Data":"2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122"} Jan 24 07:11:32 crc kubenswrapper[4675]: E0124 07:11:32.513188 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-95xkb" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.532918 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-8k7rv" podStartSLOduration=6.532902934 podStartE2EDuration="6.532902934s" podCreationTimestamp="2026-01-24 07:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:32.526872239 +0000 UTC m=+1093.822977462" watchObservedRunningTime="2026-01-24 07:11:32.532902934 +0000 UTC m=+1093.829008157" Jan 24 07:11:32 crc kubenswrapper[4675]: I0124 07:11:32.573164 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2x2kb-config-9whd8" podStartSLOduration=9.573145563 podStartE2EDuration="9.573145563s" podCreationTimestamp="2026-01-24 07:11:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:32.563256734 +0000 UTC m=+1093.859361967" watchObservedRunningTime="2026-01-24 07:11:32.573145563 +0000 UTC m=+1093.869250786" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.518981 4675 generic.go:334] "Generic (PLEG): container finished" podID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerID="1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c" exitCode=0 Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.519088 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerDied","Data":"1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c"} Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.521048 4675 generic.go:334] "Generic (PLEG): container finished" podID="099e300a-0fba-4964-9d0b-34124522c8f3" containerID="874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a" exitCode=0 Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.521094 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2x2kb-config-9whd8" event={"ID":"099e300a-0fba-4964-9d0b-34124522c8f3","Type":"ContainerDied","Data":"874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a"} Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.588151 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2x2kb" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.649931 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 
07:11:33.660190 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cf53054f-7616-43d6-9aeb-eb5f880b6e40-etc-swift\") pod \"swift-storage-0\" (UID: \"cf53054f-7616-43d6-9aeb-eb5f880b6e40\") " pod="openstack/swift-storage-0" Jan 24 07:11:33 crc kubenswrapper[4675]: I0124 07:11:33.675951 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 24 07:11:34 crc kubenswrapper[4675]: E0124 07:11:34.051009 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.258703 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.529331 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"f74c1c5ecd4e950d117cb862a809c63f4eafa20c3cc7e0b767b66111f9f43cc0"} Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.863065 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8k7rv" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.872530 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973372 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973433 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") pod \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973467 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973489 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973497 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run" (OuterVolumeSpecName: "var-run") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973527 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973612 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973643 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") pod \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\" (UID: \"38c46b58-28e2-4896-8ae5-dc53cbe96ec9\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973671 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") pod \"099e300a-0fba-4964-9d0b-34124522c8f3\" (UID: \"099e300a-0fba-4964-9d0b-34124522c8f3\") " Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973969 4675 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973641 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod 
"099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.973993 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.974552 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.974806 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts" (OuterVolumeSpecName: "scripts") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.975132 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38c46b58-28e2-4896-8ae5-dc53cbe96ec9" (UID: "38c46b58-28e2-4896-8ae5-dc53cbe96ec9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.979241 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv" (OuterVolumeSpecName: "kube-api-access-7mhtv") pod "099e300a-0fba-4964-9d0b-34124522c8f3" (UID: "099e300a-0fba-4964-9d0b-34124522c8f3"). InnerVolumeSpecName "kube-api-access-7mhtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:34 crc kubenswrapper[4675]: I0124 07:11:34.979656 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2" (OuterVolumeSpecName: "kube-api-access-pjbr2") pod "38c46b58-28e2-4896-8ae5-dc53cbe96ec9" (UID: "38c46b58-28e2-4896-8ae5-dc53cbe96ec9"). InnerVolumeSpecName "kube-api-access-pjbr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075207 4675 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075241 4675 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075250 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099e300a-0fba-4964-9d0b-34124522c8f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075264 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbr2\" (UniqueName: 
\"kubernetes.io/projected/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-kube-api-access-pjbr2\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075275 4675 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099e300a-0fba-4964-9d0b-34124522c8f3-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075283 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c46b58-28e2-4896-8ae5-dc53cbe96ec9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.075291 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mhtv\" (UniqueName: \"kubernetes.io/projected/099e300a-0fba-4964-9d0b-34124522c8f3-kube-api-access-7mhtv\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.134902 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.368536 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"]
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.375183 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2x2kb-config-9whd8"]
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.534886 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.549359 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8k7rv" event={"ID":"38c46b58-28e2-4896-8ae5-dc53cbe96ec9","Type":"ContainerDied","Data":"3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717"}
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.549395 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f27e7a1de19a3663d515cab7eb208f074ff2aedcaf28dcfba54a96ec83f8717"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.549453 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8k7rv"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.564434 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c343fcde01b25a8f7d8ab2aa7d492ddd249226f04d2356230412270392cf122"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.564499 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2x2kb-config-9whd8"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.816982 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6lfkb"]
Jan 24 07:11:35 crc kubenswrapper[4675]: E0124 07:11:35.823940 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerName="swift-ring-rebalance"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824166 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerName="swift-ring-rebalance"
Jan 24 07:11:35 crc kubenswrapper[4675]: E0124 07:11:35.824262 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerName="mariadb-account-create-update"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824335 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerName="mariadb-account-create-update"
Jan 24 07:11:35 crc kubenswrapper[4675]: E0124 07:11:35.824453 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" containerName="ovn-config"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824534 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" containerName="ovn-config"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824788 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="57da3a87-eeeb-47c8-b1bd-6a160dd81ff8" containerName="swift-ring-rebalance"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.824909 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" containerName="mariadb-account-create-update"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.825032 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" containerName="ovn-config"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.825799 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.907410 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6lfkb"]
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.932372 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"]
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.933266 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.954229 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.987986 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.988298 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:35 crc kubenswrapper[4675]: I0124 07:11:35.994862 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090193 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090354 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.090386 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.091271 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.106587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cinder-db-create-6lfkb\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.141251 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5zwrb"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.144119 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6lfkb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.152086 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.153631 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.154630 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.169232 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.181706 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5zwrb"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.189685 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.191844 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.191898 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.192778 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.243015 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"cinder-391e-account-create-update-r55gs\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.244758 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.281297 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bbqrz"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.282472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.293542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bbqrz"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.295763 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.295922 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.296339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.296841 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.366035 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ttgww"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.367250 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.371165 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.371498 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.371784 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.372069 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.383330 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ttgww"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402157 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402466 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402703 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.402846 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.404004 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.404538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.428236 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"barbican-db-create-5zwrb\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.435233 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"barbican-ffb8-account-create-update-2lngf\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.456703 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.457863 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.461234 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.470668 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"]
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.476455 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5zwrb"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.491030 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.515390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.515734 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.515867 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.516582 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.520027 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.516540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.558781 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"neutron-db-create-bbqrz\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.599441 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bbqrz"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.621647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622041 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622282 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.622382 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.626384 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.627279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.642913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"keystone-db-sync-ttgww\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") " pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.684965 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.724499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.724548 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.725281 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.754202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"neutron-4de6-account-create-update-vzw5r\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.815510 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r"
Jan 24 07:11:36 crc kubenswrapper[4675]: I0124 07:11:36.952115 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099e300a-0fba-4964-9d0b-34124522c8f3" path="/var/lib/kubelet/pods/099e300a-0fba-4964-9d0b-34124522c8f3/volumes"
Jan 24 07:11:38 crc kubenswrapper[4675]: I0124 07:11:38.629481 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:11:38 crc kubenswrapper[4675]: I0124 07:11:38.629905 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:11:38 crc kubenswrapper[4675]: I0124 07:11:38.892947 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5zwrb"]
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.177827 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6lfkb"]
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.279681 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ttgww"]
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.302591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"]
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.394518 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"]
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.600863 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"]
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.607076 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4de6-account-create-update-vzw5r" event={"ID":"ba347982-6836-4e3f-80c3-ef28ffc5e5cc","Type":"ContainerStarted","Data":"30fa7852dd2cd0c46b6287d9bd9331a1f3b6baa20f6a90a77efc7b725b86fc49"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.613448 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"0617920e754350b9c06bb7df18005b6b1daa3ce8d49d79b6c375ffc8459c806a"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.613567 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"2cd35c0f19aeeb4bab8e72603ad63dd854416651c80bd39ad835cc64dce64d41"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.615280 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bbqrz"]
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.622206 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb8-account-create-update-2lngf" event={"ID":"f802e166-b89b-4e38-9230-762edc86b32c","Type":"ContainerStarted","Data":"6e049d65046fc8bd2d246cb907d0d0be4b249ca0224a8649a7d6f2866dfa2350"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.623619 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerStarted","Data":"243cc2bd6dd3a09154104a9083678353839a94d8467ae1e1de86d7b6bc695da9"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.624985 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerStarted","Data":"bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.625009 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerStarted","Data":"e778afde4a601de36f6f4d3877893264b0f3b9192d53f14716fa1a1c8bbb75f7"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.627470 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerStarted","Data":"03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e"}
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.627524 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerStarted","Data":"b8088589a7b47ccaff1366db2f315b596852c98dffbcff028ab603c645ee271e"}
Jan 24 07:11:39 crc kubenswrapper[4675]: W0124 07:11:39.640056 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab6d6162_9f1a_409f_a1aa_87a14a15bf7f.slice/crio-0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736 WatchSource:0}: Error finding container 0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736: Status 404 returned error can't find the container with id 0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.652691 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-6lfkb" podStartSLOduration=4.652661248 podStartE2EDuration="4.652661248s" podCreationTimestamp="2026-01-24 07:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:39.639647255 +0000 UTC m=+1100.935752478" watchObservedRunningTime="2026-01-24 07:11:39.652661248 +0000 UTC m=+1100.948766471"
Jan 24 07:11:39 crc kubenswrapper[4675]: I0124 07:11:39.674083 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-5zwrb" podStartSLOduration=3.674058783 podStartE2EDuration="3.674058783s" podCreationTimestamp="2026-01-24 07:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:39.656508741 +0000 UTC m=+1100.952613964" watchObservedRunningTime="2026-01-24 07:11:39.674058783 +0000 UTC m=+1100.970164006"
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.641989 4675 generic.go:334] "Generic (PLEG): container finished" podID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerID="03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e" exitCode=0
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.642678 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerDied","Data":"03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.649527 4675 generic.go:334] "Generic (PLEG): container finished" podID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerID="6d211dc6ddf9ea6d7e3e8e95b729de63c53d51d2eead6595b62cad41e16dadc4" exitCode=0
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.649609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4de6-account-create-update-vzw5r" event={"ID":"ba347982-6836-4e3f-80c3-ef28ffc5e5cc","Type":"ContainerDied","Data":"6d211dc6ddf9ea6d7e3e8e95b729de63c53d51d2eead6595b62cad41e16dadc4"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.652006 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"4127ff24f622d04e4b50ebcd517a9334968f2e41d1a01ae243cb11e8f67f78d4"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.652037 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"a8ef207c46b04e2a928306f9a3920e47acdf3c708baed032d4da30c34a4fd7d7"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.653834 4675 generic.go:334] "Generic (PLEG): container finished" podID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerID="face7e5c0b8054d6c99e86c42a7c3b558ca54c06b16b7b249ea8d2239d88036b" exitCode=0
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.653957 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-391e-account-create-update-r55gs" event={"ID":"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f","Type":"ContainerDied","Data":"face7e5c0b8054d6c99e86c42a7c3b558ca54c06b16b7b249ea8d2239d88036b"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.653984 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-391e-account-create-update-r55gs" event={"ID":"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f","Type":"ContainerStarted","Data":"0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.660643 4675 generic.go:334] "Generic (PLEG): container finished" podID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerID="c2b0d0fa45b902eb0ffa086ad50d248f34796e32c1a20209565126bead4f77e0" exitCode=0
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.660761 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bbqrz" event={"ID":"5e0a3027-2e26-4258-aaee-a5f0df76fe34","Type":"ContainerDied","Data":"c2b0d0fa45b902eb0ffa086ad50d248f34796e32c1a20209565126bead4f77e0"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.660791 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bbqrz" event={"ID":"5e0a3027-2e26-4258-aaee-a5f0df76fe34","Type":"ContainerStarted","Data":"a410f9cc0f282aeac5c407ee365c1406247090a0b1ca3f5d68c0ef0fc41a5d8f"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.662690 4675 generic.go:334] "Generic (PLEG): container finished" podID="f802e166-b89b-4e38-9230-762edc86b32c" containerID="2617af6172b0f231078c0676a80fde395fe2ef1163c9fa0791bb89294c2f806c" exitCode=0
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.662773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb8-account-create-update-2lngf" event={"ID":"f802e166-b89b-4e38-9230-762edc86b32c","Type":"ContainerDied","Data":"2617af6172b0f231078c0676a80fde395fe2ef1163c9fa0791bb89294c2f806c"}
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.664382 4675 generic.go:334] "Generic (PLEG): container finished" podID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerID="bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0" exitCode=0
Jan 24 07:11:40 crc kubenswrapper[4675]: I0124 07:11:40.664439 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerDied","Data":"bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0"}
Jan 24 07:11:44 crc kubenswrapper[4675]: E0124 07:11:44.285368 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in
memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.338354 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.357201 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.392827 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.422207 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436483 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") pod \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436523 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") pod \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\" (UID: \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436579 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") pod \"5e0a3027-2e26-4258-aaee-a5f0df76fe34\" (UID: 
\"5e0a3027-2e26-4258-aaee-a5f0df76fe34\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.436782 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") pod \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\" (UID: \"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.437447 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e0a3027-2e26-4258-aaee-a5f0df76fe34" (UID: "5e0a3027-2e26-4258-aaee-a5f0df76fe34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.437855 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" (UID: "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.438075 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.440908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm" (OuterVolumeSpecName: "kube-api-access-89clm") pod "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" (UID: "8e546ec4-3ea8-4140-9238-8d5cdd09e4e9"). InnerVolumeSpecName "kube-api-access-89clm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.443519 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv" (OuterVolumeSpecName: "kube-api-access-gsvbv") pod "5e0a3027-2e26-4258-aaee-a5f0df76fe34" (UID: "5e0a3027-2e26-4258-aaee-a5f0df76fe34"). InnerVolumeSpecName "kube-api-access-gsvbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.454077 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542397 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") pod \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542568 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") pod \"f802e166-b89b-4e38-9230-762edc86b32c\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542600 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") pod \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\" (UID: \"ba347982-6836-4e3f-80c3-ef28ffc5e5cc\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542656 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjq6q\" (UniqueName: 
\"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") pod \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542682 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") pod \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542724 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") pod \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\" (UID: \"cce44ec9-1ffb-44d7-bcce-250a1fdf6959\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542772 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") pod \"f802e166-b89b-4e38-9230-762edc86b32c\" (UID: \"f802e166-b89b-4e38-9230-762edc86b32c\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.542872 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") pod \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\" (UID: \"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f\") " Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543209 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89clm\" (UniqueName: \"kubernetes.io/projected/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-kube-api-access-89clm\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543220 4675 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543230 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e0a3027-2e26-4258-aaee-a5f0df76fe34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543239 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsvbv\" (UniqueName: \"kubernetes.io/projected/5e0a3027-2e26-4258-aaee-a5f0df76fe34-kube-api-access-gsvbv\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.543939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" (UID: "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.546303 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cce44ec9-1ffb-44d7-bcce-250a1fdf6959" (UID: "cce44ec9-1ffb-44d7-bcce-250a1fdf6959"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.546397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba347982-6836-4e3f-80c3-ef28ffc5e5cc" (UID: "ba347982-6836-4e3f-80c3-ef28ffc5e5cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.546555 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f802e166-b89b-4e38-9230-762edc86b32c" (UID: "f802e166-b89b-4e38-9230-762edc86b32c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.547915 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf" (OuterVolumeSpecName: "kube-api-access-c99pf") pod "ba347982-6836-4e3f-80c3-ef28ffc5e5cc" (UID: "ba347982-6836-4e3f-80c3-ef28ffc5e5cc"). InnerVolumeSpecName "kube-api-access-c99pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.550158 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s" (OuterVolumeSpecName: "kube-api-access-29n5s") pod "f802e166-b89b-4e38-9230-762edc86b32c" (UID: "f802e166-b89b-4e38-9230-762edc86b32c"). InnerVolumeSpecName "kube-api-access-29n5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.550839 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg" (OuterVolumeSpecName: "kube-api-access-rqdkg") pod "cce44ec9-1ffb-44d7-bcce-250a1fdf6959" (UID: "cce44ec9-1ffb-44d7-bcce-250a1fdf6959"). InnerVolumeSpecName "kube-api-access-rqdkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.551524 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q" (OuterVolumeSpecName: "kube-api-access-gjq6q") pod "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" (UID: "ab6d6162-9f1a-409f-a1aa-87a14a15bf7f"). InnerVolumeSpecName "kube-api-access-gjq6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645122 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f802e166-b89b-4e38-9230-762edc86b32c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645155 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645165 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjq6q\" (UniqueName: \"kubernetes.io/projected/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-kube-api-access-gjq6q\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645178 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645192 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqdkg\" (UniqueName: \"kubernetes.io/projected/cce44ec9-1ffb-44d7-bcce-250a1fdf6959-kube-api-access-rqdkg\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645201 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29n5s\" (UniqueName: \"kubernetes.io/projected/f802e166-b89b-4e38-9230-762edc86b32c-kube-api-access-29n5s\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645209 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.645219 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c99pf\" (UniqueName: \"kubernetes.io/projected/ba347982-6836-4e3f-80c3-ef28ffc5e5cc-kube-api-access-c99pf\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.710562 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"204d783f8319845444798286b072a4f66b8ac0266261a709e1f6e920cd514933"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.710604 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"56ba9ddfd5672aac91cc5eac3ce1a99e58bc3e1c3f1089afed4ffbc33bc9ffa6"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.711666 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-391e-account-create-update-r55gs" event={"ID":"ab6d6162-9f1a-409f-a1aa-87a14a15bf7f","Type":"ContainerDied","Data":"0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.711688 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e8ab46931e83d3eb103016d6cec4e2c139bcceae4b9b4b0fc1d86cd7589c736" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.711763 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-391e-account-create-update-r55gs" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.715413 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bbqrz" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.715830 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bbqrz" event={"ID":"5e0a3027-2e26-4258-aaee-a5f0df76fe34","Type":"ContainerDied","Data":"a410f9cc0f282aeac5c407ee365c1406247090a0b1ca3f5d68c0ef0fc41a5d8f"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.715885 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a410f9cc0f282aeac5c407ee365c1406247090a0b1ca3f5d68c0ef0fc41a5d8f" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.718793 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ffb8-account-create-update-2lngf" event={"ID":"f802e166-b89b-4e38-9230-762edc86b32c","Type":"ContainerDied","Data":"6e049d65046fc8bd2d246cb907d0d0be4b249ca0224a8649a7d6f2866dfa2350"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.718832 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e049d65046fc8bd2d246cb907d0d0be4b249ca0224a8649a7d6f2866dfa2350" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.718881 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ffb8-account-create-update-2lngf" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.722468 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerStarted","Data":"1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.726152 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6lfkb" event={"ID":"cce44ec9-1ffb-44d7-bcce-250a1fdf6959","Type":"ContainerDied","Data":"e778afde4a601de36f6f4d3877893264b0f3b9192d53f14716fa1a1c8bbb75f7"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.726188 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e778afde4a601de36f6f4d3877893264b0f3b9192d53f14716fa1a1c8bbb75f7" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.726598 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6lfkb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.747492 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5zwrb" event={"ID":"8e546ec4-3ea8-4140-9238-8d5cdd09e4e9","Type":"ContainerDied","Data":"b8088589a7b47ccaff1366db2f315b596852c98dffbcff028ab603c645ee271e"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.747530 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8088589a7b47ccaff1366db2f315b596852c98dffbcff028ab603c645ee271e" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.747606 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5zwrb" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.756641 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4de6-account-create-update-vzw5r" event={"ID":"ba347982-6836-4e3f-80c3-ef28ffc5e5cc","Type":"ContainerDied","Data":"30fa7852dd2cd0c46b6287d9bd9331a1f3b6baa20f6a90a77efc7b725b86fc49"} Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.756678 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fa7852dd2cd0c46b6287d9bd9331a1f3b6baa20f6a90a77efc7b725b86fc49" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.756741 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4de6-account-create-update-vzw5r" Jan 24 07:11:45 crc kubenswrapper[4675]: I0124 07:11:45.761525 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ttgww" podStartSLOduration=3.919172845 podStartE2EDuration="9.761504834s" podCreationTimestamp="2026-01-24 07:11:36 +0000 UTC" firstStartedPulling="2026-01-24 07:11:39.313627259 +0000 UTC m=+1100.609732482" lastFinishedPulling="2026-01-24 07:11:45.155959248 +0000 UTC m=+1106.452064471" observedRunningTime="2026-01-24 07:11:45.756320759 +0000 UTC m=+1107.052425982" watchObservedRunningTime="2026-01-24 07:11:45.761504834 +0000 UTC m=+1107.057610057" Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.764024 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerStarted","Data":"f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67"} Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.769057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"075d9b1bd932aaed6d2af5cd675c905700a22594707c0714a8050c0c213eb0b4"} Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.769117 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"4956529c8ae5702877fc98004b8fb7c945d4c35d72fe1221fa277be750eddbe3"} Jan 24 07:11:46 crc kubenswrapper[4675]: I0124 07:11:46.790738 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-95xkb" podStartSLOduration=2.90086547 podStartE2EDuration="33.790703267s" podCreationTimestamp="2026-01-24 07:11:13 +0000 UTC" firstStartedPulling="2026-01-24 07:11:14.611897491 +0000 UTC m=+1075.908002714" lastFinishedPulling="2026-01-24 07:11:45.501735288 +0000 UTC m=+1106.797840511" observedRunningTime="2026-01-24 07:11:46.783087243 +0000 UTC m=+1108.079192476" watchObservedRunningTime="2026-01-24 07:11:46.790703267 +0000 UTC m=+1108.086808500" Jan 24 07:11:48 crc kubenswrapper[4675]: I0124 07:11:48.792507 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"809153b31ad5c24469cfc83a2aee0404877249b17bc1fb90cd7fb5879cead740"} Jan 24 07:11:48 crc kubenswrapper[4675]: I0124 07:11:48.793452 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"efc12e4d15de27c184e04f72877609d9a106c023223187f2182842c8a9d1f7cc"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807115 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"0f6da2a4e817a348c9f135758a18b9cec7894c8aea285f51eff95bfef121fd55"} Jan 24 07:11:49 crc 
kubenswrapper[4675]: I0124 07:11:49.807582 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"b944801ea62a5cffc238e63a890a526a10226ed7eec176ad51adf44037d115fb"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"1ead6c386fb66aa4edcfdf05a198cfb2bf5a347be6cf3a28e90ac538a560303d"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807603 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"6b355ae3f48a06f8ca33b1dd301f098b42c925f43ddc11eef607461cb1d33fd6"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.807611 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cf53054f-7616-43d6-9aeb-eb5f880b6e40","Type":"ContainerStarted","Data":"8366e0c49d57fd2d3ad9edb8b10fa14b6769efd8be1fec3acc0b0d7dcd1cc80f"} Jan 24 07:11:49 crc kubenswrapper[4675]: I0124 07:11:49.848175 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.814040431 podStartE2EDuration="49.848159461s" podCreationTimestamp="2026-01-24 07:11:00 +0000 UTC" firstStartedPulling="2026-01-24 07:11:34.297955904 +0000 UTC m=+1095.594061137" lastFinishedPulling="2026-01-24 07:11:48.332074944 +0000 UTC m=+1109.628180167" observedRunningTime="2026-01-24 07:11:49.841638533 +0000 UTC m=+1111.137743766" watchObservedRunningTime="2026-01-24 07:11:49.848159461 +0000 UTC m=+1111.144264684" Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197159 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"] Jan 24 07:11:50 crc 
kubenswrapper[4675]: E0124 07:11:50.197455 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197471 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197482 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f802e166-b89b-4e38-9230-762edc86b32c" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197488 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f802e166-b89b-4e38-9230-762edc86b32c" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197500 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197506 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197520 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197526 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197543 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197549 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: E0124 07:11:50.197563 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197569 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197699 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f802e166-b89b-4e38-9230-762edc86b32c" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197712 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197737 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197748 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197758 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" containerName="mariadb-database-create"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.197767 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" containerName="mariadb-account-create-update"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.198475 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.200589 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.222359 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"]
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358682 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.358754 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460153 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460207 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460229 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460262 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.460282 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.462614 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.462695 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.462758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.462998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.463469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.491177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"dnsmasq-dns-5c79d794d7-njlcw\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.512083 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:50 crc kubenswrapper[4675]: I0124 07:11:50.954597 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"]
Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.824830 4675 generic.go:334] "Generic (PLEG): container finished" podID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerID="1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9" exitCode=0
Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.824894 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerDied","Data":"1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9"}
Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.827337 4675 generic.go:334] "Generic (PLEG): container finished" podID="6391092e-27b4-4604-8a19-416c5073b6bf" containerID="495b578ee6202ae9668863232052b610d816ffa68e81d10913cfb8812139ec2f" exitCode=0
Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.827381 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerDied","Data":"495b578ee6202ae9668863232052b610d816ffa68e81d10913cfb8812139ec2f"}
Jan 24 07:11:51 crc kubenswrapper[4675]: I0124 07:11:51.827425 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerStarted","Data":"0ab1f402eb6037ac10eccabab78ec60fe868a1b4dcab74a8478319dd857695e3"}
Jan 24 07:11:52 crc kubenswrapper[4675]: I0124 07:11:52.835664 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerStarted","Data":"0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1"}
Jan 24 07:11:52 crc kubenswrapper[4675]: I0124 07:11:52.837392 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw"
Jan 24 07:11:52 crc kubenswrapper[4675]: I0124 07:11:52.858228 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" podStartSLOduration=2.85820544 podStartE2EDuration="2.85820544s" podCreationTimestamp="2026-01-24 07:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:52.851350855 +0000 UTC m=+1114.147456068" watchObservedRunningTime="2026-01-24 07:11:52.85820544 +0000 UTC m=+1114.154310673"
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.146252 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.310641 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") pod \"c949a736-b46d-4907-a24d-17f28f4e3f71\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") "
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.310698 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") pod \"c949a736-b46d-4907-a24d-17f28f4e3f71\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") "
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.310834 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") pod \"c949a736-b46d-4907-a24d-17f28f4e3f71\" (UID: \"c949a736-b46d-4907-a24d-17f28f4e3f71\") "
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.317002 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc" (OuterVolumeSpecName: "kube-api-access-mv7lc") pod "c949a736-b46d-4907-a24d-17f28f4e3f71" (UID: "c949a736-b46d-4907-a24d-17f28f4e3f71"). InnerVolumeSpecName "kube-api-access-mv7lc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.346896 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data" (OuterVolumeSpecName: "config-data") pod "c949a736-b46d-4907-a24d-17f28f4e3f71" (UID: "c949a736-b46d-4907-a24d-17f28f4e3f71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.348139 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c949a736-b46d-4907-a24d-17f28f4e3f71" (UID: "c949a736-b46d-4907-a24d-17f28f4e3f71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.412518 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv7lc\" (UniqueName: \"kubernetes.io/projected/c949a736-b46d-4907-a24d-17f28f4e3f71-kube-api-access-mv7lc\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.412558 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.412567 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c949a736-b46d-4907-a24d-17f28f4e3f71-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.848030 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ttgww"
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.848653 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ttgww" event={"ID":"c949a736-b46d-4907-a24d-17f28f4e3f71","Type":"ContainerDied","Data":"243cc2bd6dd3a09154104a9083678353839a94d8467ae1e1de86d7b6bc695da9"}
Jan 24 07:11:53 crc kubenswrapper[4675]: I0124 07:11:53.848688 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="243cc2bd6dd3a09154104a9083678353839a94d8467ae1e1de86d7b6bc695da9"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.148267 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xr9fm"]
Jan 24 07:11:54 crc kubenswrapper[4675]: E0124 07:11:54.148885 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerName="keystone-db-sync"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.148903 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerName="keystone-db-sync"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.149231 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" containerName="keystone-db-sync"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.149942 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.164151 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.164366 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.164391 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.165026 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.183608 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.194480 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.220706 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.226980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227021 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227149 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.227176 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.247940 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.249494 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.307765 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340576 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340621 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340667 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340707 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340770 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340789 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340822 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340847 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340881 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.340941 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.363878 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.373399 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.376538 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.383556 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.404278 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.425909 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"keystone-bootstrap-xr9fm\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446561 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446633 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446676 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446752 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.446774 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.447518 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.448029 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.448555 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.448830 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.449348 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.460571 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4hsxg"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.461967 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.467639 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.476850 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.477060 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r2l2l"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.477269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.535540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"dnsmasq-dns-5b868669f-z9bxc\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.547645 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.547680 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.548190 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.553816 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4hsxg"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.592999 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-fp9qw"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.594437 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.601876 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.602077 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.602246 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.615204 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fp9qw"]
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.640793 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tvkgt"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.653838 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.654062 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.654163 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.705685 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.706627 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.725587 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"neutron-db-sync-4hsxg\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") " pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.744312 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.746212 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.748535 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.748859 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.752206 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.754976 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-mvflk" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756138 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756250 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756277 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.756321 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.784954 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.786077 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.798214 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gdfs9" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.798419 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.798534 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.801914 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.812198 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.822201 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.838257 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.838435 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.843866 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.855790 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858604 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858642 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858664 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858680 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858732 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858749 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858774 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858789 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858807 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858824 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858873 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858892 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858935 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.858980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.862040 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.871696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.880916 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.895108 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.902211 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"placement-db-sync-fp9qw\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.959852 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960137 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960230 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960311 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod 
\"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960391 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960459 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960547 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960640 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960746 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc 
kubenswrapper[4675]: I0124 07:11:54.960815 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960886 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.960976 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961103 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961199 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961264 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961481 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.961602 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.963319 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"horizon-d69f445c7-kqzw8\" (UID: 
\"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.993036 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.994108 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.994183 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.994458 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.995780 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:11:54 crc kubenswrapper[4675]: I0124 07:11:54.996214 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"placement-db-sync-fp9qw\" 
(UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:54.996930 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.002898 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.003911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.004216 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw" Jan 24 07:11:55 crc kubenswrapper[4675]: E0124 07:11:55.023684 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache]" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.028822 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.037598 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.086177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"horizon-d69f445c7-kqzw8\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") " pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.087066 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"cinder-db-sync-58bxq\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093387 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093434 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093574 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093624 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093657 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093731 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093784 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"ceilometer-0\" (UID: 
\"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093832 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093895 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093952 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.093991 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.094025 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" 
Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.094968 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.099232 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.104057 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.113421 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.131146 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.163828 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.168609 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.178086 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.183175 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.203918 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204008 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204077 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204147 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.204947 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.205077 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.215264 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.215585 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.215913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.217174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.232705 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.239590 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"ceilometer-0\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.245463 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.251407 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.284806 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"dnsmasq-dns-cf78879c9-td45s\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312828 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312934 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.312970 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.313018 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.320829 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.321920 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.329406 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.330072 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9pmfh" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.334823 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.355165 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414739 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414823 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.414866 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417681 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"horizon-56ff9c89dc-jttpz\" (UID: 
\"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417734 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417756 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.417777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.419238 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.419318 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.419466 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.423539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.462463 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"horizon-56ff9c89dc-jttpz\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") " pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.509202 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.519741 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.519788 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.519811 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.534508 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.547224 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc 
kubenswrapper[4675]: I0124 07:11:55.551251 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"barbican-db-sync-g8f6m\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.632296 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.647324 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.800331 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.818532 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:55 crc kubenswrapper[4675]: W0124 07:11:55.861393 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod871f5758_f078_4271_acb9_e5ca8bfdc2eb.slice/crio-f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e WatchSource:0}: Error finding container f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e: Status 404 returned error can't find the container with id f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.937850 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerStarted","Data":"2a00eab4c015c2a71c3302cf901cc256a67db21e51a0bdeb53f6a384b0ab080c"} Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.950493 4675 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns" containerID="cri-o://0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1" gracePeriod=10 Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.950603 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerStarted","Data":"f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e"} Jan 24 07:11:55 crc kubenswrapper[4675]: I0124 07:11:55.973734 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"] Jan 24 07:11:55 crc kubenswrapper[4675]: W0124 07:11:55.998651 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7854ef09_5060_4534_96e2_2963cddcc691.slice/crio-20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718 WatchSource:0}: Error finding container 20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718: Status 404 returned error can't find the container with id 20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718 Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.319908 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.406870 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:11:56 crc kubenswrapper[4675]: W0124 07:11:56.429474 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d590a0d_6c41_407a_8e89_3e7b9a64a3f7.slice/crio-c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc WatchSource:0}: Error finding container 
c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc: Status 404 returned error can't find the container with id c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.434549 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.440255 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.555548 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.587231 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:11:56 crc kubenswrapper[4675]: W0124 07:11:56.606999 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb93eadf_9c52_436f_8dcc_16a7ad976254.slice/crio-9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98 WatchSource:0}: Error finding container 9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98: Status 404 returned error can't find the container with id 9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98 Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.607141 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.933485 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.959410 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.987401 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d69f445c7-kqzw8" 
event={"ID":"c4464d27-9360-4f78-92cd-3b9d11204ec2","Type":"ContainerStarted","Data":"6300eddfd5812e2ef5a13cb0e83a7dac291f0af984180d712cb6ee55436346f3"} Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.992435 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:11:56 crc kubenswrapper[4675]: I0124 07:11:56.994374 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.012075 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerStarted","Data":"fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.024355 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerStarted","Data":"c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.025856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56ff9c89dc-jttpz" event={"ID":"cb93eadf-9c52-436f-8dcc-16a7ad976254","Type":"ContainerStarted","Data":"9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.030054 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerStarted","Data":"24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.030615 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" 
event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerStarted","Data":"9da5085f19a402a7076d2b62d3720d8bab0822f44dc0a613a31fb3c57b813329"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.040324 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerStarted","Data":"112402427a5eb414fe7cfc4f30de89d1b0218f39fa69ddaa6dd77168312cb7ae"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.043572 4675 generic.go:334] "Generic (PLEG): container finished" podID="6391092e-27b4-4604-8a19-416c5073b6bf" containerID="0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1" exitCode=0 Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.043676 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerDied","Data":"0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.056866 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094146 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" 
Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094233 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094291 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.094326 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.098919 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerStarted","Data":"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.099094 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init" containerID="cri-o://30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95" gracePeriod=10 Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.125848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"0ad4338e6f939f6bda642b2d5397708669ef3b6004444834c598ae8f3b747800"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.132177 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4hsxg" podStartSLOduration=3.132154694 podStartE2EDuration="3.132154694s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:57.097400125 +0000 UTC m=+1118.393505348" watchObservedRunningTime="2026-01-24 07:11:57.132154694 +0000 UTC m=+1118.428259917" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.141346 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerStarted","Data":"5b259eb76af8e66f76ee1bcfd7ccd3f155f31927bbacf08cb7666192371fbd27"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222770 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerStarted","Data":"6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222804 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerStarted","Data":"20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718"} Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222836 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " 
pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222880 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222912 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222937 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.222957 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.225371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.225603 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.226039 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.244866 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.255220 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xr9fm" podStartSLOduration=3.255206062 podStartE2EDuration="3.255206062s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:57.253138891 +0000 UTC m=+1118.549244114" watchObservedRunningTime="2026-01-24 07:11:57.255206062 +0000 UTC m=+1118.551311285" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.258391 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"horizon-66b6dd9b6f-mms9h\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") " pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc 
kubenswrapper[4675]: I0124 07:11:57.319624 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.459549 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534151 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534290 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534398 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534432 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534479 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtjqz\" (UniqueName: 
\"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.534529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") pod \"6391092e-27b4-4604-8a19-416c5073b6bf\" (UID: \"6391092e-27b4-4604-8a19-416c5073b6bf\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.587445 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz" (OuterVolumeSpecName: "kube-api-access-vtjqz") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "kube-api-access-vtjqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.615531 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.655929 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.657658 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtjqz\" (UniqueName: \"kubernetes.io/projected/6391092e-27b4-4604-8a19-416c5073b6bf-kube-api-access-vtjqz\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.657703 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.657760 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.712989 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config" (OuterVolumeSpecName: "config") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.735485 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.753150 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.762064 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.762109 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.787492 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6391092e-27b4-4604-8a19-416c5073b6bf" (UID: "6391092e-27b4-4604-8a19-416c5073b6bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863438 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863492 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqh4l\" (UniqueName: \"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863530 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863564 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863635 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.863809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") pod \"6602e4dc-7422-48ec-9a0f-919faff36b4e\" (UID: \"6602e4dc-7422-48ec-9a0f-919faff36b4e\") " Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.864180 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6391092e-27b4-4604-8a19-416c5073b6bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.880928 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l" (OuterVolumeSpecName: "kube-api-access-zqh4l") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "kube-api-access-zqh4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.900286 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config" (OuterVolumeSpecName: "config") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.901245 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.902134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.921020 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.921046 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6602e4dc-7422-48ec-9a0f-919faff36b4e" (UID: "6602e4dc-7422-48ec-9a0f-919faff36b4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966085 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966118 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966128 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966137 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966145 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6602e4dc-7422-48ec-9a0f-919faff36b4e-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:57 crc kubenswrapper[4675]: I0124 07:11:57.966153 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqh4l\" (UniqueName: 
\"kubernetes.io/projected/6602e4dc-7422-48ec-9a0f-919faff36b4e-kube-api-access-zqh4l\") on node \"crc\" DevicePath \"\"" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.158057 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.246030 4675 generic.go:334] "Generic (PLEG): container finished" podID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerID="24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e" exitCode=0 Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.246102 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerDied","Data":"24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e"} Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.292380 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" event={"ID":"6391092e-27b4-4604-8a19-416c5073b6bf","Type":"ContainerDied","Data":"0ab1f402eb6037ac10eccabab78ec60fe868a1b4dcab74a8478319dd857695e3"} Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.292441 4675 scope.go:117] "RemoveContainer" containerID="0b3c07f23548cc340802bda7f8f284bbe8cb1d557506edd0e2bb72db39db55f1" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.292598 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-njlcw" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319186 4675 generic.go:334] "Generic (PLEG): container finished" podID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95" exitCode=0 Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerDied","Data":"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"} Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319265 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" event={"ID":"6602e4dc-7422-48ec-9a0f-919faff36b4e","Type":"ContainerDied","Data":"2a00eab4c015c2a71c3302cf901cc256a67db21e51a0bdeb53f6a384b0ab080c"} Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.319318 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-z9bxc" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.337231 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b6dd9b6f-mms9h" event={"ID":"f7a6babb-0cb5-4967-9e60-749d73be754b","Type":"ContainerStarted","Data":"3dbb003cca25be0a35bc048d9a61e606b4e3d23ac2e4ee99addd88a24699871f"} Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.391001 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"] Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.413875 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-njlcw"] Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.487145 4675 scope.go:117] "RemoveContainer" containerID="495b578ee6202ae9668863232052b610d816ffa68e81d10913cfb8812139ec2f" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.491602 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.530969 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-z9bxc"] Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.573934 4675 scope.go:117] "RemoveContainer" containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.675776 4675 scope.go:117] "RemoveContainer" containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95" Jan 24 07:11:58 crc kubenswrapper[4675]: E0124 07:11:58.676395 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95\": container with ID starting with 30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95 not found: ID does not exist" 
containerID="30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.676440 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95"} err="failed to get container status \"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95\": rpc error: code = NotFound desc = could not find container \"30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95\": container with ID starting with 30ac86154fbab9272dd9c81b4ec3feb52a8f51156703bfb83e49f316c657ca95 not found: ID does not exist" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.959415 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" path="/var/lib/kubelet/pods/6391092e-27b4-4604-8a19-416c5073b6bf/volumes" Jan 24 07:11:58 crc kubenswrapper[4675]: I0124 07:11:58.960617 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" path="/var/lib/kubelet/pods/6602e4dc-7422-48ec-9a0f-919faff36b4e/volumes" Jan 24 07:11:59 crc kubenswrapper[4675]: I0124 07:11:59.347451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerStarted","Data":"6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1"} Jan 24 07:11:59 crc kubenswrapper[4675]: I0124 07:11:59.348477 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:12:02 crc kubenswrapper[4675]: I0124 07:12:02.423325 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e53c5a1-6293-46d9-9783-e7d183050152" containerID="f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67" exitCode=0 Jan 24 07:12:02 crc kubenswrapper[4675]: I0124 07:12:02.423413 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerDied","Data":"f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67"} Jan 24 07:12:02 crc kubenswrapper[4675]: I0124 07:12:02.449988 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-td45s" podStartSLOduration=8.449966934 podStartE2EDuration="8.449966934s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:11:59.369921956 +0000 UTC m=+1120.666027179" watchObservedRunningTime="2026-01-24 07:12:02.449966934 +0000 UTC m=+1123.746072157" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.784520 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.810458 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6565db7666-dt2lk"] Jan 24 07:12:04 crc kubenswrapper[4675]: E0124 07:12:03.812157 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812172 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init" Jan 24 07:12:04 crc kubenswrapper[4675]: E0124 07:12:03.812203 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812209 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns" Jan 24 07:12:04 crc kubenswrapper[4675]: E0124 07:12:03.812222 4675 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="init" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812227 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="init" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812376 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6602e4dc-7422-48ec-9a0f-919faff36b4e" containerName="init" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.812398 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6391092e-27b4-4604-8a19-416c5073b6bf" containerName="dnsmasq-dns" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.813258 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.815853 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.832085 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"] Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.924401 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938066 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938124 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod 
\"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938153 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938177 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938349 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.938413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod 
\"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.971099 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-656ff794dd-jx8ld"]
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:03.972472 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.022237 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-656ff794dd-jx8ld"]
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048829 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048855 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048878 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048949 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.048985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.050600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.051195 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.054914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.060702 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.076234 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.080534 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.093670 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod \"horizon-6565db7666-dt2lk\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.147197 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7e7730-0a42-48b0-bb7e-da95eb915126-logs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151814 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-config-data\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151837 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-secret-key\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151860 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-scripts\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151878 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-combined-ca-bundle\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151906 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-tls-certs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.151973 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jhc\" (UniqueName: \"kubernetes.io/projected/4b7e7730-0a42-48b0-bb7e-da95eb915126-kube-api-access-94jhc\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253491 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7e7730-0a42-48b0-bb7e-da95eb915126-logs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253564 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-config-data\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-secret-key\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253609 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-scripts\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-combined-ca-bundle\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253649 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-tls-certs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.253698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94jhc\" (UniqueName: \"kubernetes.io/projected/4b7e7730-0a42-48b0-bb7e-da95eb915126-kube-api-access-94jhc\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.254434 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7e7730-0a42-48b0-bb7e-da95eb915126-logs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.255594 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-config-data\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.257393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b7e7730-0a42-48b0-bb7e-da95eb915126-scripts\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.262496 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-secret-key\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.267904 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-combined-ca-bundle\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.268273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b7e7730-0a42-48b0-bb7e-da95eb915126-horizon-tls-certs\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.288572 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94jhc\" (UniqueName: \"kubernetes.io/projected/4b7e7730-0a42-48b0-bb7e-da95eb915126-kube-api-access-94jhc\") pod \"horizon-656ff794dd-jx8ld\" (UID: \"4b7e7730-0a42-48b0-bb7e-da95eb915126\") " pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.310223 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.450356 4675 generic.go:334] "Generic (PLEG): container finished" podID="7854ef09-5060-4534-96e2-2963cddcc691" containerID="6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6" exitCode=0
Jan 24 07:12:04 crc kubenswrapper[4675]: I0124 07:12:04.450406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerDied","Data":"6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6"}
Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.357980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-td45s"
Jan 24 07:12:05 crc kubenswrapper[4675]: E0124 07:12:05.395494 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]"
Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.441214 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"]
Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.441539 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" containerID="cri-o://c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41" gracePeriod=10
Jan 24 07:12:05 crc kubenswrapper[4675]: I0124 07:12:05.641348 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused"
Jan 24 07:12:06 crc kubenswrapper[4675]: I0124 07:12:06.487197 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1e65888-5032-411e-8910-5438e0aff32f" containerID="c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41" exitCode=0
Jan 24 07:12:06 crc kubenswrapper[4675]: I0124 07:12:06.487401 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerDied","Data":"c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41"}
Jan 24 07:12:08 crc kubenswrapper[4675]: I0124 07:12:08.630445 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:12:08 crc kubenswrapper[4675]: I0124 07:12:08.630797 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.438363 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb"
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.498375 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") "
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.499283 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") "
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.500009 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") "
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.500094 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") pod \"7e53c5a1-6293-46d9-9783-e7d183050152\" (UID: \"7e53c5a1-6293-46d9-9783-e7d183050152\") "
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.504121 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: "7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.506964 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt" (OuterVolumeSpecName: "kube-api-access-f6jbt") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: "7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "kube-api-access-f6jbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.531137 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-95xkb" event={"ID":"7e53c5a1-6293-46d9-9783-e7d183050152","Type":"ContainerDied","Data":"aba894555e46ec22e81cf8b996b4a30e472efbd5119d3ffa6f69ee65a8d156ee"}
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.531186 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba894555e46ec22e81cf8b996b4a30e472efbd5119d3ffa6f69ee65a8d156ee"
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.531260 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-95xkb"
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.546799 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: "7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.604662 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data" (OuterVolumeSpecName: "config-data") pod "7e53c5a1-6293-46d9-9783-e7d183050152" (UID: "7e53c5a1-6293-46d9-9783-e7d183050152"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.605026 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.606627 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.606639 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6jbt\" (UniqueName: \"kubernetes.io/projected/7e53c5a1-6293-46d9-9783-e7d183050152-kube-api-access-f6jbt\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.642155 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused"
Jan 24 07:12:10 crc kubenswrapper[4675]: I0124 07:12:10.709341 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53c5a1-6293-46d9-9783-e7d183050152-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.894338 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"]
Jan 24 07:12:11 crc kubenswrapper[4675]: E0124 07:12:11.896765 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" containerName="glance-db-sync"
Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.896795 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" containerName="glance-db-sync"
Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.897004 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" containerName="glance-db-sync"
Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.897875 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:11 crc kubenswrapper[4675]: I0124 07:12:11.927788 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"]
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.040926 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041037 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041082 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041191 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.041275 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142381 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142456 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142532 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.142594 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.143380 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.143525 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.144202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.144206 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.147405 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.189841 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod \"dnsmasq-dns-56df8fb6b7-c2j5t\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.229178 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.975356 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.976774 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.979943 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.980156 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 24 07:12:12 crc kubenswrapper[4675]: I0124 07:12:12.980222 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wwtw8"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:12.999377 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.061793 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.063943 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.066230 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.098390 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099599 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099655 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099740 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.099950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.100079 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.100134 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202182 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202234 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202261 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202277 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202293 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202349 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202374 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202402 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202428 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202451 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202468 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0"
Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202507 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0"
Jan 24
07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202536 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202567 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.202837 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.205687 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.205805 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.210667 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.218900 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.219099 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.230393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.237539 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304087 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304145 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304169 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304589 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304654 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304686 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.304970 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.305060 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.305381 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.310344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.311072 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.317977 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.326454 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " 
pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.373913 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:12:13 crc kubenswrapper[4675]: I0124 07:12:13.378148 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:14 crc kubenswrapper[4675]: I0124 07:12:14.759369 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:14 crc kubenswrapper[4675]: I0124 07:12:14.840513 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:15 crc kubenswrapper[4675]: I0124 07:12:15.640706 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 24 07:12:15 crc kubenswrapper[4675]: I0124 07:12:15.641519 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:12:15 crc kubenswrapper[4675]: E0124 07:12:15.645000 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice/crio-1a0f7f9298b39f89dc7e44159744c590495167a2324265fbbf43626cef45d3eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42600dc3_a4f8_45bf_9bdf_4fb8a52f6aad.slice\": RecentStats: unable to find data in memory cache]" Jan 
24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.761841 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.762366 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h645hbfh8ch8h97h656h56dh675h545h8bh77hbbh6h546h9ch55fh5f5h5dfh5ch594h6ch65h64bhch5fbh5b6h5c5h8fh54h5f7hfcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gd6wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,P
rocMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56ff9c89dc-jttpz_openstack(cb93eadf-9c52-436f-8dcc-16a7ad976254): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.787195 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-56ff9c89dc-jttpz" podUID="cb93eadf-9c52-436f-8dcc-16a7ad976254" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.788325 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.788459 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4hbfh5cfh666h547h667h57chddh84h7dh8fh546h99h5fch55bhd4h549h687hd9h666h59chfdhbdh5bdh9fh58hb9h57dh5b7h67dhc8h576q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps82j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-d69f445c7-kqzw8_openstack(c4464d27-9360-4f78-92cd-3b9d11204ec2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 
07:12:16.797459 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-d69f445c7-kqzw8" podUID="c4464d27-9360-4f78-92cd-3b9d11204ec2" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.812217 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 07:12:16.812401 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h5bfh699h575hcbh68fhfdh5bfh5cfh9chbh66fh658hf4hf5h88h67h65fh54bhc4h645hd9h546h68dh8fh5d9h85h89h6fh587h548h5fcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkt2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66b6dd9b6f-mms9h_openstack(f7a6babb-0cb5-4967-9e60-749d73be754b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:16 crc kubenswrapper[4675]: E0124 
07:12:16.817605 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66b6dd9b6f-mms9h" podUID="f7a6babb-0cb5-4967-9e60-749d73be754b" Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.853980 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm" Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.990651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.990776 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.990858 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.991346 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" 
(UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.992092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.992179 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") pod \"7854ef09-5060-4534-96e2-2963cddcc691\" (UID: \"7854ef09-5060-4534-96e2-2963cddcc691\") " Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.996869 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:16 crc kubenswrapper[4675]: I0124 07:12:16.997035 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.005882 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts" (OuterVolumeSpecName: "scripts") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.023967 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97" (OuterVolumeSpecName: "kube-api-access-bms97") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "kube-api-access-bms97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.032280 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data" (OuterVolumeSpecName: "config-data") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.032825 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7854ef09-5060-4534-96e2-2963cddcc691" (UID: "7854ef09-5060-4534-96e2-2963cddcc691"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.095937 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bms97\" (UniqueName: \"kubernetes.io/projected/7854ef09-5060-4534-96e2-2963cddcc691-kube-api-access-bms97\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.095983 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.095997 4675 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.096010 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.096022 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.096030 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7854ef09-5060-4534-96e2-2963cddcc691-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.606521 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xr9fm"
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.609093 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xr9fm" event={"ID":"7854ef09-5060-4534-96e2-2963cddcc691","Type":"ContainerDied","Data":"20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718"}
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.609145 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20677f478245dfa1a37b92490323e7eeca93b47bd251fa0d61cdbbcd49c38718"
Jan 24 07:12:17 crc kubenswrapper[4675]: I0124 07:12:17.974996 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"]
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.032094 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xr9fm"]
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.063029 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v7kb4"]
Jan 24 07:12:18 crc kubenswrapper[4675]: E0124 07:12:18.064447 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7854ef09-5060-4534-96e2-2963cddcc691" containerName="keystone-bootstrap"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.064509 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7854ef09-5060-4534-96e2-2963cddcc691" containerName="keystone-bootstrap"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.064960 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7854ef09-5060-4534-96e2-2963cddcc691" containerName="keystone-bootstrap"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.065653 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.076050 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.080949 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.081154 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.081280 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.081780 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.102311 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"]
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233463 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233516 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233604 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233672 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233761 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.233851 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336156 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336191 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336250 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336335 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.336492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.342000 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.342205 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.344250 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.345544 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.352231 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.356277 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"keystone-bootstrap-v7kb4\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.404685 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4"
Jan 24 07:12:18 crc kubenswrapper[4675]: I0124 07:12:18.963090 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7854ef09-5060-4534-96e2-2963cddcc691" path="/var/lib/kubelet/pods/7854ef09-5060-4534-96e2-2963cddcc691/volumes"
Jan 24 07:12:21 crc kubenswrapper[4675]: I0124 07:12:21.644254 4675 generic.go:334] "Generic (PLEG): container finished" podID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerID="fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902" exitCode=0
Jan 24 07:12:21 crc kubenswrapper[4675]: I0124 07:12:21.644434 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerDied","Data":"fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902"}
Jan 24 07:12:25 crc kubenswrapper[4675]: I0124 07:12:25.641356 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout"
Jan 24 07:12:30 crc kubenswrapper[4675]: I0124 07:12:30.642579 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout"
Jan 24 07:12:31 crc kubenswrapper[4675]: E0124 07:12:31.717958 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Jan 24 07:12:31 crc kubenswrapper[4675]: E0124 07:12:31.718124 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c8h67dhfch57bh667h676h89h5d4h5b7h5b8h8fhc8h5bbhfh54dh66chcdh695h98hc4h579h8fhfch586h59dh5d4h64ch5ch595h9bhc7h94q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pb8hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(62b7e06f-b840-408c-b026-a086b975812f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.152605 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.157586 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txvlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g8f6m_openstack(57270c73-9e5a-4629-8c7a-85123438a067): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.159064 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g8f6m" podUID="57270c73-9e5a-4629-8c7a-85123438a067"
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.336147 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78"
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.342091 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4hsxg"
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.347699 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h"
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.356132 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8"
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.366412 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz"
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.458957 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") pod \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459001 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459030 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459058 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459095 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459143 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459165 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459194 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459243 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459263 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") pod \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459324 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459344 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459380 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459395 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459423 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459440 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459463 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459481 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") pod \"c4464d27-9360-4f78-92cd-3b9d11204ec2\" (UID: \"c4464d27-9360-4f78-92cd-3b9d11204ec2\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459499 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") pod \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\" (UID: \"871f5758-f078-4271-acb9-e5ca8bfdc2eb\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459527 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") pod \"cb93eadf-9c52-436f-8dcc-16a7ad976254\" (UID: \"cb93eadf-9c52-436f-8dcc-16a7ad976254\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459586 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") pod \"f7a6babb-0cb5-4967-9e60-749d73be754b\" (UID: \"f7a6babb-0cb5-4967-9e60-749d73be754b\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.459610 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") pod \"b1e65888-5032-411e-8910-5438e0aff32f\" (UID: \"b1e65888-5032-411e-8910-5438e0aff32f\") "
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs" (OuterVolumeSpecName: "logs") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs" (OuterVolumeSpecName: "logs") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461455 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data" (OuterVolumeSpecName: "config-data") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461555 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts" (OuterVolumeSpecName: "scripts") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.461654 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs" (OuterVolumeSpecName: "logs") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.462595 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data" (OuterVolumeSpecName: "config-data") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.462705 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data" (OuterVolumeSpecName: "config-data") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.466497 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts" (OuterVolumeSpecName: "scripts") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.469199 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts" (OuterVolumeSpecName: "scripts") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.480956 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf" (OuterVolumeSpecName: "kube-api-access-gd6wf") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "kube-api-access-gd6wf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.481192 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6" (OuterVolumeSpecName: "kube-api-access-xd8c6") pod "871f5758-f078-4271-acb9-e5ca8bfdc2eb" (UID: "871f5758-f078-4271-acb9-e5ca8bfdc2eb"). InnerVolumeSpecName "kube-api-access-xd8c6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.487036 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j" (OuterVolumeSpecName: "kube-api-access-ps82j") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "kube-api-access-ps82j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.489162 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.489288 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cb93eadf-9c52-436f-8dcc-16a7ad976254" (UID: "cb93eadf-9c52-436f-8dcc-16a7ad976254"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.498331 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l" (OuterVolumeSpecName: "kube-api-access-xkt2l") pod "f7a6babb-0cb5-4967-9e60-749d73be754b" (UID: "f7a6babb-0cb5-4967-9e60-749d73be754b"). InnerVolumeSpecName "kube-api-access-xkt2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.512288 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c4464d27-9360-4f78-92cd-3b9d11204ec2" (UID: "c4464d27-9360-4f78-92cd-3b9d11204ec2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.530898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw" (OuterVolumeSpecName: "kube-api-access-gjfhw") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "kube-api-access-gjfhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.541803 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "871f5758-f078-4271-acb9-e5ca8bfdc2eb" (UID: "871f5758-f078-4271-acb9-e5ca8bfdc2eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.548860 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config" (OuterVolumeSpecName: "config") pod "871f5758-f078-4271-acb9-e5ca8bfdc2eb" (UID: "871f5758-f078-4271-acb9-e5ca8bfdc2eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.554760 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561310 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkt2l\" (UniqueName: \"kubernetes.io/projected/f7a6babb-0cb5-4967-9e60-749d73be754b-kube-api-access-xkt2l\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561353 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93eadf-9c52-436f-8dcc-16a7ad976254-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561368 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjfhw\" (UniqueName: \"kubernetes.io/projected/b1e65888-5032-411e-8910-5438e0aff32f-kube-api-access-gjfhw\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561379 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561389 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4464d27-9360-4f78-92cd-3b9d11204ec2-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561398 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4464d27-9360-4f78-92cd-3b9d11204ec2-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561407 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7a6babb-0cb5-4967-9e60-749d73be754b-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561416 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561424 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8c6\" (UniqueName: \"kubernetes.io/projected/871f5758-f078-4271-acb9-e5ca8bfdc2eb-kube-api-access-xd8c6\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561432 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561441 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f7a6babb-0cb5-4967-9e60-749d73be754b-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561449 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561457 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4464d27-9360-4f78-92cd-3b9d11204ec2-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561465 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd6wf\" (UniqueName: \"kubernetes.io/projected/cb93eadf-9c52-436f-8dcc-16a7ad976254-kube-api-access-gd6wf\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561554 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561567 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a6babb-0cb5-4967-9e60-749d73be754b-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561576 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb93eadf-9c52-436f-8dcc-16a7ad976254-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561584 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb93eadf-9c52-436f-8dcc-16a7ad976254-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.561593 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps82j\" (UniqueName: \"kubernetes.io/projected/c4464d27-9360-4f78-92cd-3b9d11204ec2-kube-api-access-ps82j\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:32 crc
kubenswrapper[4675]: I0124 07:12:32.561601 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871f5758-f078-4271-acb9-e5ca8bfdc2eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.596670 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config" (OuterVolumeSpecName: "config") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.609798 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.626307 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1e65888-5032-411e-8910-5438e0aff32f" (UID: "b1e65888-5032-411e-8910-5438e0aff32f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.663828 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.663896 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.663911 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1e65888-5032-411e-8910-5438e0aff32f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.749201 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56ff9c89dc-jttpz" event={"ID":"cb93eadf-9c52-436f-8dcc-16a7ad976254","Type":"ContainerDied","Data":"9c43ff8ad709d592e92a1a98ded72a28038dba8ee627cd90181542ae456c6e98"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.749259 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56ff9c89dc-jttpz" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.753529 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b6dd9b6f-mms9h" event={"ID":"f7a6babb-0cb5-4967-9e60-749d73be754b","Type":"ContainerDied","Data":"3dbb003cca25be0a35bc048d9a61e606b4e3d23ac2e4ee99addd88a24699871f"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.753584 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66b6dd9b6f-mms9h" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.756842 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" event={"ID":"b1e65888-5032-411e-8910-5438e0aff32f","Type":"ContainerDied","Data":"e4e440f949a16c7c92a1572ceb6020eb2c0abbdd347846f7e3ad225704016290"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.756887 4675 scope.go:117] "RemoveContainer" containerID="c6e71287ec7fd966046c5d90ff95c855b676a7ce9888a7f83191c7628a04df41" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.757136 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.765461 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d69f445c7-kqzw8" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.765960 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d69f445c7-kqzw8" event={"ID":"c4464d27-9360-4f78-92cd-3b9d11204ec2","Type":"ContainerDied","Data":"6300eddfd5812e2ef5a13cb0e83a7dac291f0af984180d712cb6ee55436346f3"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.776392 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4hsxg" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.776787 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4hsxg" event={"ID":"871f5758-f078-4271-acb9-e5ca8bfdc2eb","Type":"ContainerDied","Data":"f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e"} Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.776853 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84662ad3ec2c456c286e916ae9aa92e43378ed477168ce624ec721241a6bc5e" Jan 24 07:12:32 crc kubenswrapper[4675]: E0124 07:12:32.781267 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g8f6m" podUID="57270c73-9e5a-4629-8c7a-85123438a067" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.853366 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.873402 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56ff9c89dc-jttpz"] Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.977459 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb93eadf-9c52-436f-8dcc-16a7ad976254" path="/var/lib/kubelet/pods/cb93eadf-9c52-436f-8dcc-16a7ad976254/volumes" Jan 24 07:12:32 crc kubenswrapper[4675]: I0124 07:12:32.977894 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.022664 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66b6dd9b6f-mms9h"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.042710 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.052298 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mtp78"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.073903 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.087078 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d69f445c7-kqzw8"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.614152 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.734264 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:12:33 crc kubenswrapper[4675]: E0124 07:12:33.735067 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="init" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735087 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="init" Jan 24 07:12:33 crc kubenswrapper[4675]: E0124 07:12:33.735098 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerName="neutron-db-sync" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerName="neutron-db-sync" Jan 24 07:12:33 crc kubenswrapper[4675]: E0124 07:12:33.735123 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735129 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e65888-5032-411e-8910-5438e0aff32f" 
containerName="dnsmasq-dns" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735365 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.735537 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" containerName="neutron-db-sync" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.736511 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.792827 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799779 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799825 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " 
pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799875 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.799904 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.895822 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.898122 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901040 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901098 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901118 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901162 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901227 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 
07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.901281 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.902249 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.902305 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.902852 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.903073 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.903356 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917436 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r2l2l" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917705 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917944 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.917947 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.929588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:12:33 crc kubenswrapper[4675]: I0124 07:12:33.947822 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"dnsmasq-dns-6b7b667979-zrk79\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.006980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007376 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007468 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.007713 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.064097 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109628 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109701 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109767 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109799 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.109869 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc 
kubenswrapper[4675]: I0124 07:12:34.123886 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.127529 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.139417 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.139490 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.150987 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"neutron-67cddfd9dd-rbhzj\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") " pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.268875 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.804239 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.804799 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psx25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-58bxq_openstack(0d590a0d-6c41-407a-8e89-3e7b9a64a3f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.806057 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-58bxq" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" Jan 24 07:12:34 crc kubenswrapper[4675]: E0124 07:12:34.859877 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-58bxq" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.961658 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e65888-5032-411e-8910-5438e0aff32f" 
path="/var/lib/kubelet/pods/b1e65888-5032-411e-8910-5438e0aff32f/volumes" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.962441 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4464d27-9360-4f78-92cd-3b9d11204ec2" path="/var/lib/kubelet/pods/c4464d27-9360-4f78-92cd-3b9d11204ec2/volumes" Jan 24 07:12:34 crc kubenswrapper[4675]: I0124 07:12:34.962881 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a6babb-0cb5-4967-9e60-749d73be754b" path="/var/lib/kubelet/pods/f7a6babb-0cb5-4967-9e60-749d73be754b/volumes" Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.099042 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"] Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.399628 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-656ff794dd-jx8ld"] Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.525913 4675 scope.go:117] "RemoveContainer" containerID="ae8e22c487bc5bca69369f08e9cf6514b43a32b610d114fdbb4d48fac338177d" Jan 24 07:12:35 crc kubenswrapper[4675]: W0124 07:12:35.579289 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b7e7730_0a42_48b0_bb7e_da95eb915126.slice/crio-8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4 WatchSource:0}: Error finding container 8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4: Status 404 returned error can't find the container with id 8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4 Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.644147 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mtp78" podUID="b1e65888-5032-411e-8910-5438e0aff32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.873528 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"8bd1c4beba04d5fcc9572ea915bae361afe798b8dbb1c73401d9df44778367a4"} Jan 24 07:12:35 crc kubenswrapper[4675]: I0124 07:12:35.876146 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerStarted","Data":"d950b238b60f1812543dfb4f7f5294f5560f40c993673b23b13c0d2609edbe30"} Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.193304 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.514601 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:12:36 crc kubenswrapper[4675]: W0124 07:12:36.554765 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5da3fd8e_4f1c_4a68_ae8d_ab0b06193e01.slice/crio-a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d WatchSource:0}: Error finding container a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d: Status 404 returned error can't find the container with id a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.615555 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.668705 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.673483 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.685564 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.685871 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.729564 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789861 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789967 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.789997 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.790048 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.790064 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.790099 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891697 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891791 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891829 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891878 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891895 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891922 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.891971 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod 
\"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.913152 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.918319 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.931344 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.931885 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.939531 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.948267 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.953316 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:36 crc kubenswrapper[4675]: I0124 07:12:36.979397 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"neutron-5cfd8b5875-msfrk\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") " pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.028542 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerStarted","Data":"a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.028576 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" event={"ID":"b4b87366-fdf2-4654-aab4-efa74076b162","Type":"ContainerStarted","Data":"ba3cb824f7658dc7273442d0f03b26f2e1b17aa2660c55aee97db099bc8849ca"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.028586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerStarted","Data":"d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.034359 4675 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.040092 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48"} Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.055256 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-fp9qw" podStartSLOduration=7.345451675 podStartE2EDuration="43.055237543s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="2026-01-24 07:11:56.487457554 +0000 UTC m=+1117.783562767" lastFinishedPulling="2026-01-24 07:12:32.197243402 +0000 UTC m=+1153.493348635" observedRunningTime="2026-01-24 07:12:37.040316613 +0000 UTC m=+1158.336421836" watchObservedRunningTime="2026-01-24 07:12:37.055237543 +0000 UTC m=+1158.351342756" Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.119979 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.171973 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:12:37 crc kubenswrapper[4675]: I0124 07:12:37.941784 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.067033 4675 generic.go:334] "Generic (PLEG): container finished" podID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerID="a357ef50188a1acd7da313f1d5fc0be108c9ca15168c4882b220cb0612f377a4" exitCode=0 Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.067339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" 
event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerDied","Data":"a357ef50188a1acd7da313f1d5fc0be108c9ca15168c4882b220cb0612f377a4"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.067494 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerStarted","Data":"b1d2359f9bd1730fd38d06c61fbb2923f790bf4bfe4ea9760488e068602ba6b1"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.078194 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerStarted","Data":"7718fcd3b8f83feea63d09a395c7695283cd48482e3fdf877f211fe1be62a3b9"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.092855 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"d623efa84153d9478067cecc083922766f30de0af338ee7da4123256d77162f1"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.110344 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerStarted","Data":"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.110407 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerStarted","Data":"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.151041 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" 
event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerStarted","Data":"5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.151087 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerStarted","Data":"9a865440f381a7417cf12468f043671da2b8b23ef036f738147c217bd9897103"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.157768 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerStarted","Data":"eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.157959 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6565db7666-dt2lk" podStartSLOduration=34.553713585 podStartE2EDuration="35.157930588s" podCreationTimestamp="2026-01-24 07:12:03 +0000 UTC" firstStartedPulling="2026-01-24 07:12:35.642683813 +0000 UTC m=+1156.938789036" lastFinishedPulling="2026-01-24 07:12:36.246900816 +0000 UTC m=+1157.543006039" observedRunningTime="2026-01-24 07:12:38.151500704 +0000 UTC m=+1159.447605937" watchObservedRunningTime="2026-01-24 07:12:38.157930588 +0000 UTC m=+1159.454035821" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.169890 4675 generic.go:334] "Generic (PLEG): container finished" podID="b4b87366-fdf2-4654-aab4-efa74076b162" containerID="c61bf5984b471f88cfdd02231ae03b8b91f374729b2fde1b61566e741f61d5d3" exitCode=0 Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.169971 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" event={"ID":"b4b87366-fdf2-4654-aab4-efa74076b162","Type":"ContainerDied","Data":"c61bf5984b471f88cfdd02231ae03b8b91f374729b2fde1b61566e741f61d5d3"} Jan 24 07:12:38 crc 
kubenswrapper[4675]: I0124 07:12:38.214633 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerStarted","Data":"583775ff60e4fb2db235511c186b845c09489d7ccd9368474576c53217f77ef8"} Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.243788 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v7kb4" podStartSLOduration=20.243767488 podStartE2EDuration="20.243767488s" podCreationTimestamp="2026-01-24 07:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:38.201814666 +0000 UTC m=+1159.497919989" watchObservedRunningTime="2026-01-24 07:12:38.243767488 +0000 UTC m=+1159.539872711" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.637148 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.637508 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.637775 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.638426 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.638483 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c" gracePeriod=600 Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.808424 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896475 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896565 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896585 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896741 4675 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896776 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.896795 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") pod \"b4b87366-fdf2-4654-aab4-efa74076b162\" (UID: \"b4b87366-fdf2-4654-aab4-efa74076b162\") " Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.931663 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr" (OuterVolumeSpecName: "kube-api-access-qhlvr") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "kube-api-access-qhlvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.949494 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:38 crc kubenswrapper[4675]: I0124 07:12:38.995198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:38.999603 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:38.999645 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhlvr\" (UniqueName: \"kubernetes.io/projected/b4b87366-fdf2-4654-aab4-efa74076b162-kube-api-access-qhlvr\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:38.999657 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.006827 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.007198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.008920 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config" (OuterVolumeSpecName: "config") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.013379 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b4b87366-fdf2-4654-aab4-efa74076b162" (UID: "b4b87366-fdf2-4654-aab4-efa74076b162"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.103239 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.104161 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.104236 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4b87366-fdf2-4654-aab4-efa74076b162-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.257441 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" 
event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.283733 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerStarted","Data":"7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.284826 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.286211 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerStarted","Data":"5ce213dfa2f439c1f1ec2ddd0ebc6a5f1f2676cc28fbcde221469753153be07d"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.291483 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-656ff794dd-jx8ld" podStartSLOduration=35.16918923 podStartE2EDuration="36.291459198s" podCreationTimestamp="2026-01-24 07:12:03 +0000 UTC" firstStartedPulling="2026-01-24 07:12:35.693276853 +0000 UTC m=+1156.989382076" lastFinishedPulling="2026-01-24 07:12:36.815546831 +0000 UTC m=+1158.111652044" observedRunningTime="2026-01-24 07:12:39.279763566 +0000 UTC m=+1160.575868789" watchObservedRunningTime="2026-01-24 07:12:39.291459198 +0000 UTC m=+1160.587564421" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.299034 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" event={"ID":"b4b87366-fdf2-4654-aab4-efa74076b162","Type":"ContainerDied","Data":"ba3cb824f7658dc7273442d0f03b26f2e1b17aa2660c55aee97db099bc8849ca"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.299083 4675 scope.go:117] "RemoveContainer" 
containerID="c61bf5984b471f88cfdd02231ae03b8b91f374729b2fde1b61566e741f61d5d3" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.299202 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-c2j5t" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.306609 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerStarted","Data":"9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.319393 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerStarted","Data":"13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.321287 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.332707 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67cddfd9dd-rbhzj" podStartSLOduration=6.332679762 podStartE2EDuration="6.332679762s" podCreationTimestamp="2026-01-24 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:39.325809716 +0000 UTC m=+1160.621914949" watchObservedRunningTime="2026-01-24 07:12:39.332679762 +0000 UTC m=+1160.628784985" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.341046 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c" exitCode=0 Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.342583 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c"} Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.362122 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" podStartSLOduration=6.362102432 podStartE2EDuration="6.362102432s" podCreationTimestamp="2026-01-24 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:39.355034591 +0000 UTC m=+1160.651139814" watchObservedRunningTime="2026-01-24 07:12:39.362102432 +0000 UTC m=+1160.658207655" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.519315 4675 scope.go:117] "RemoveContainer" containerID="9ae90be563283b996d1b10bf3ad8715e03978ae7930422faef174e860a3bf62d" Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.519608 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:39 crc kubenswrapper[4675]: I0124 07:12:39.549017 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-c2j5t"] Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.453484 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerStarted","Data":"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.484068 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerStarted","Data":"1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 
07:12:40.484393 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" containerID="cri-o://9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313" gracePeriod=30 Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.484932 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" containerID="cri-o://1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401" gracePeriod=30 Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.516050 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.516030113 podStartE2EDuration="29.516030113s" podCreationTimestamp="2026-01-24 07:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:40.512177311 +0000 UTC m=+1161.808282534" watchObservedRunningTime="2026-01-24 07:12:40.516030113 +0000 UTC m=+1161.812135336" Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.530107 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerStarted","Data":"4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.551309 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0"} Jan 24 07:12:40 crc kubenswrapper[4675]: I0124 07:12:40.966590 4675 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" path="/var/lib/kubelet/pods/b4b87366-fdf2-4654-aab4-efa74076b162/volumes" Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561270 4675 generic.go:334] "Generic (PLEG): container finished" podID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerID="1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401" exitCode=143 Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561573 4675 generic.go:334] "Generic (PLEG): container finished" podID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerID="9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313" exitCode=143 Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerDied","Data":"1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401"} Jan 24 07:12:41 crc kubenswrapper[4675]: I0124 07:12:41.561623 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerDied","Data":"9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313"} Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.583568 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerStarted","Data":"afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d"} Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.583900 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log" containerID="cri-o://4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4" gracePeriod=30 Jan 24 07:12:42 crc 
kubenswrapper[4675]: I0124 07:12:42.584158 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd" containerID="cri-o://afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d" gracePeriod=30 Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.616600 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=30.616582147 podStartE2EDuration="30.616582147s" podCreationTimestamp="2026-01-24 07:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:42.612220031 +0000 UTC m=+1163.908325254" watchObservedRunningTime="2026-01-24 07:12:42.616582147 +0000 UTC m=+1163.912687370" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.620313 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerStarted","Data":"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"} Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.620664 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cfd8b5875-msfrk" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.805141 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.825803 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cfd8b5875-msfrk" podStartSLOduration=6.825783863 podStartE2EDuration="6.825783863s" podCreationTimestamp="2026-01-24 07:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:42.657048792 +0000 UTC m=+1163.953154015" watchObservedRunningTime="2026-01-24 07:12:42.825783863 +0000 UTC m=+1164.121889086" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.923978 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924086 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924140 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: 
\"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924225 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.924301 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") pod \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\" (UID: \"41c90fb1-ef62-4afe-bda9-4d6422af2ef1\") " Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.925098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.925958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs" (OuterVolumeSpecName: "logs") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.955398 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts" (OuterVolumeSpecName: "scripts") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.955656 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d" (OuterVolumeSpecName: "kube-api-access-f5x7d") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "kube-api-access-f5x7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:42 crc kubenswrapper[4675]: I0124 07:12:42.987047 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.007146 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070858 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070895 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070909 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070918 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070927 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5x7d\" (UniqueName: \"kubernetes.io/projected/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-kube-api-access-f5x7d\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.070937 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.099016 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.102068 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data" (OuterVolumeSpecName: "config-data") pod "41c90fb1-ef62-4afe-bda9-4d6422af2ef1" (UID: "41c90fb1-ef62-4afe-bda9-4d6422af2ef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.172709 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c90fb1-ef62-4afe-bda9-4d6422af2ef1-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.172769 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.379304 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.379640 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.677875 4675 generic.go:334] "Generic (PLEG): container finished" podID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerID="afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d" exitCode=143 Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.677915 4675 generic.go:334] "Generic (PLEG): container finished" podID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerID="4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4" exitCode=143 Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.677980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerDied","Data":"afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d"} Jan 24 07:12:43 crc 
kubenswrapper[4675]: I0124 07:12:43.678011 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerDied","Data":"4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4"} Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.694019 4675 generic.go:334] "Generic (PLEG): container finished" podID="f54df341-915c-4505-bd2e-81923b07a2be" containerID="d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5" exitCode=0 Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.694122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerDied","Data":"d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5"} Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.723208 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41c90fb1-ef62-4afe-bda9-4d6422af2ef1","Type":"ContainerDied","Data":"583775ff60e4fb2db235511c186b845c09489d7ccd9368474576c53217f77ef8"} Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.723275 4675 scope.go:117] "RemoveContainer" containerID="1acce53197bbeb28823c1254d8c7d061aef9082a7171e8f9284a5439ef9f6401" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.723788 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.798791 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.820035 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.850894 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: E0124 07:12:43.859041 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" containerName="init" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859076 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" containerName="init" Jan 24 07:12:43 crc kubenswrapper[4675]: E0124 07:12:43.859112 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859119 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" Jan 24 07:12:43 crc kubenswrapper[4675]: E0124 07:12:43.859134 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859141 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859312 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b87366-fdf2-4654-aab4-efa74076b162" containerName="init" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859341 4675 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-log" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.859359 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" containerName="glance-httpd" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.860223 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.864904 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.865640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.869971 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985590 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985656 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985681 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985706 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985766 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985811 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985901 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:43 crc kubenswrapper[4675]: I0124 07:12:43.985940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.066598 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093360 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093385 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093410 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 
07:12:44.093435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093524 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093558 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.093603 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.100542 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.101135 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.101325 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.101479 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.105263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.110437 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.113944 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.150608 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.152275 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.159634 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.168814 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") " pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.171891 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"] Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.172143 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-td45s" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns" containerID="cri-o://6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1" gracePeriod=10 Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.187105 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.310822 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.310867 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.734173 4675 generic.go:334] "Generic (PLEG): container finished" podID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerID="6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1" exitCode=0 Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.734230 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerDied","Data":"6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1"} Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.741549 4675 generic.go:334] "Generic (PLEG): container finished" podID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerID="eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53" exitCode=0 Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.741735 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerDied","Data":"eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53"} Jan 24 07:12:44 crc kubenswrapper[4675]: I0124 07:12:44.956828 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c90fb1-ef62-4afe-bda9-4d6422af2ef1" path="/var/lib/kubelet/pods/41c90fb1-ef62-4afe-bda9-4d6422af2ef1/volumes" Jan 24 07:12:45 crc kubenswrapper[4675]: I0124 07:12:45.356013 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-td45s" 
podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.443538 4675 scope.go:117] "RemoveContainer" containerID="9ab5498eaf08210b1fdb6ae5aaaa769eff77ad7836cadafdee38c95e0e58d313" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.615552 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.638061 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.805066 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v7kb4" event={"ID":"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01","Type":"ContainerDied","Data":"a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d"} Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.805106 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a024edd2c89d4dbb0880d1d277e7e0ec183583ef14d8222b2e5089f06997de0d" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.805168 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v7kb4" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.811998 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812103 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812153 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812188 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812203 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812341 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812457 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812476 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812506 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") pod \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\" (UID: \"5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812556 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.812583 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") pod \"f54df341-915c-4505-bd2e-81923b07a2be\" (UID: \"f54df341-915c-4505-bd2e-81923b07a2be\") " Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.813683 
4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs" (OuterVolumeSpecName: "logs") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.835616 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fp9qw" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.835939 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fp9qw" event={"ID":"f54df341-915c-4505-bd2e-81923b07a2be","Type":"ContainerDied","Data":"112402427a5eb414fe7cfc4f30de89d1b0218f39fa69ddaa6dd77168312cb7ae"} Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.836243 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="112402427a5eb414fe7cfc4f30de89d1b0218f39fa69ddaa6dd77168312cb7ae" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.837504 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6" (OuterVolumeSpecName: "kube-api-access-jbjt6") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "kube-api-access-jbjt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.838619 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts" (OuterVolumeSpecName: "scripts") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.839271 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.841350 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts" (OuterVolumeSpecName: "scripts") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.850229 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.865480 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j" (OuterVolumeSpecName: "kube-api-access-x8g5j") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "kube-api-access-x8g5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.883667 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.883820 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data" (OuterVolumeSpecName: "config-data") pod "f54df341-915c-4505-bd2e-81923b07a2be" (UID: "f54df341-915c-4505-bd2e-81923b07a2be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.894946 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data" (OuterVolumeSpecName: "config-data") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915304 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54df341-915c-4505-bd2e-81923b07a2be-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915332 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8g5j\" (UniqueName: \"kubernetes.io/projected/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-kube-api-access-x8g5j\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915343 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915352 4675 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915365 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915374 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjt6\" (UniqueName: \"kubernetes.io/projected/f54df341-915c-4505-bd2e-81923b07a2be-kube-api-access-jbjt6\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915383 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915396 4675 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915403 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.915412 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54df341-915c-4505-bd2e-81923b07a2be-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.924683 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" (UID: "5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:47 crc kubenswrapper[4675]: I0124 07:12:47.957712 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016315 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016431 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016663 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016692 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.016825 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") pod \"1e547740-a536-4d48-96a0-d22ca8bca63f\" (UID: \"1e547740-a536-4d48-96a0-d22ca8bca63f\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.017499 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.059412 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp" (OuterVolumeSpecName: "kube-api-access-jfxkp") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "kube-api-access-jfxkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.099560 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.105647 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.119823 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.119850 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfxkp\" (UniqueName: \"kubernetes.io/projected/1e547740-a536-4d48-96a0-d22ca8bca63f-kube-api-access-jfxkp\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.119860 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.148590 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.165069 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.166311 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config" (OuterVolumeSpecName: "config") pod "1e547740-a536-4d48-96a0-d22ca8bca63f" (UID: "1e547740-a536-4d48-96a0-d22ca8bca63f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.223441 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.223476 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.223487 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e547740-a536-4d48-96a0-d22ca8bca63f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: W0124 07:12:48.257604 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a50c14d_d518_492c_87d1_a194dc075c9f.slice/crio-3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58 WatchSource:0}: Error finding container 3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58: Status 404 returned error can't find the container with id 3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58 Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.257640 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.268088 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324292 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324375 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324470 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324618 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324662 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324818 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.324870 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") pod \"4c064344-984b-40fd-9a3b-503d8e1531fd\" (UID: \"4c064344-984b-40fd-9a3b-503d8e1531fd\") " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.325951 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.327528 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs" (OuterVolumeSpecName: "logs") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.337379 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts" (OuterVolumeSpecName: "scripts") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.338154 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.341860 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn" (OuterVolumeSpecName: "kube-api-access-5lxbn") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "kube-api-access-5lxbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.380832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.383002 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data" (OuterVolumeSpecName: "config-data") pod "4c064344-984b-40fd-9a3b-503d8e1531fd" (UID: "4c064344-984b-40fd-9a3b-503d8e1531fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.427870 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428833 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428851 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lxbn\" (UniqueName: \"kubernetes.io/projected/4c064344-984b-40fd-9a3b-503d8e1531fd-kube-api-access-5lxbn\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428866 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428877 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428889 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c064344-984b-40fd-9a3b-503d8e1531fd-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.428901 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c064344-984b-40fd-9a3b-503d8e1531fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.448205 4675 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.530761 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.856837 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dbffd67c8-k8gzb"]
Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857289 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857305 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd"
Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857351 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857359 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns"
Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857372 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857378 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log"
Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857396 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54df341-915c-4505-bd2e-81923b07a2be" containerName="placement-db-sync"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857404 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54df341-915c-4505-bd2e-81923b07a2be" containerName="placement-db-sync"
Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857417 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerName="keystone-bootstrap"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857425 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerName="keystone-bootstrap"
Jan 24 07:12:48 crc kubenswrapper[4675]: E0124 07:12:48.857436 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="init"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857443 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="init"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857612 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54df341-915c-4505-bd2e-81923b07a2be" containerName="placement-db-sync"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857630 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-httpd"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857665 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd" containerName="glance-log"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857683 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" containerName="dnsmasq-dns"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.857694 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" containerName="keystone-bootstrap"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.858302 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.867779 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.867979 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868095 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ddgj4"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868532 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868638 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.868760 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.885637 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dbffd67c8-k8gzb"]
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.922832 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c48f89996-b4jz4"]
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.926355 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.928101 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-td45s" event={"ID":"1e547740-a536-4d48-96a0-d22ca8bca63f","Type":"ContainerDied","Data":"9da5085f19a402a7076d2b62d3720d8bab0822f44dc0a613a31fb3c57b813329"}
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.928168 4675 scope.go:117] "RemoveContainer" containerID="6ec4d7d14d6db0071695417a61ee609ea081b6e7e348c32a11719b766b0525e1"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.928349 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-td45s"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.931388 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tvkgt"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.933674 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.933901 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.945097 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.945377 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.953710 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-combined-ca-bundle\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.953913 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-fernet-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954205 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-config-data\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-scripts\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954630 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29fj5\" (UniqueName: \"kubernetes.io/projected/405f0f26-61a4-4420-a147-43d7b86ebb8e-kube-api-access-29fj5\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.954870 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-credential-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.955082 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-internal-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.955592 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-public-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:48 crc kubenswrapper[4675]: I0124 07:12:48.993870 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb"}
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.032144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c48f89996-b4jz4"]
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.056887 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf1f40fb-34b7-494b-bed1-b851a073ac8c-logs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.056969 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-combined-ca-bundle\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057025 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-scripts\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057085 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29fj5\" (UniqueName: \"kubernetes.io/projected/405f0f26-61a4-4420-a147-43d7b86ebb8e-kube-api-access-29fj5\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-scripts\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057144 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-credential-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057159 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-internal-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057186 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-public-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057214 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-config-data\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057234 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-public-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-combined-ca-bundle\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057296 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-internal-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-fernet-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057406 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-config-data\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.057425 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnb4w\" (UniqueName: \"kubernetes.io/projected/bf1f40fb-34b7-494b-bed1-b851a073ac8c-kube-api-access-dnb4w\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.063113 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4c064344-984b-40fd-9a3b-503d8e1531fd","Type":"ContainerDied","Data":"7718fcd3b8f83feea63d09a395c7695283cd48482e3fdf877f211fe1be62a3b9"}
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.063238 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.090035 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-public-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.091059 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-internal-tls-certs\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.091365 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-scripts\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.093337 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29fj5\" (UniqueName: \"kubernetes.io/projected/405f0f26-61a4-4420-a147-43d7b86ebb8e-kube-api-access-29fj5\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.101483 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-credential-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.105077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerStarted","Data":"5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8"}
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.115517 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-config-data\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.120702 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-fernet-keys\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.121356 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerStarted","Data":"3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58"}
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.127917 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"]
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.153270 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-td45s"]
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.157373 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405f0f26-61a4-4420-a147-43d7b86ebb8e-combined-ca-bundle\") pod \"keystone-5dbffd67c8-k8gzb\" (UID: \"405f0f26-61a4-4420-a147-43d7b86ebb8e\") " pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159301 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnb4w\" (UniqueName: \"kubernetes.io/projected/bf1f40fb-34b7-494b-bed1-b851a073ac8c-kube-api-access-dnb4w\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159490 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf1f40fb-34b7-494b-bed1-b851a073ac8c-logs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159621 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-combined-ca-bundle\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159785 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-scripts\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.159938 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-config-data\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.160027 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-public-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.160124 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-internal-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.161166 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf1f40fb-34b7-494b-bed1-b851a073ac8c-logs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.161303 4675 scope.go:117] "RemoveContainer" containerID="24c458e8c623625a4811d5039f766381aacba2ba8b89fd1c0b0f9eef580b418e"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.164641 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-combined-ca-bundle\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.174100 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g8f6m" podStartSLOduration=2.5760479099999998 podStartE2EDuration="54.174070398s" podCreationTimestamp="2026-01-24 07:11:55 +0000 UTC" firstStartedPulling="2026-01-24 07:11:56.631890198 +0000 UTC m=+1117.927995421" lastFinishedPulling="2026-01-24 07:12:48.229912676 +0000 UTC m=+1169.526017909" observedRunningTime="2026-01-24 07:12:49.142578198 +0000 UTC m=+1170.438683431" watchObservedRunningTime="2026-01-24 07:12:49.174070398 +0000 UTC m=+1170.470175621"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.174540 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-config-data\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.185860 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-public-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.193142 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-internal-tls-certs\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.198456 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1f40fb-34b7-494b-bed1-b851a073ac8c-scripts\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.210042 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dbffd67c8-k8gzb"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.215372 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnb4w\" (UniqueName: \"kubernetes.io/projected/bf1f40fb-34b7-494b-bed1-b851a073ac8c-kube-api-access-dnb4w\") pod \"placement-5c48f89996-b4jz4\" (UID: \"bf1f40fb-34b7-494b-bed1-b851a073ac8c\") " pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.219926 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.234930 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.246363 4675 scope.go:117] "RemoveContainer" containerID="afe751d1b3859301cd7f99e68f36aac44df1b3cd6e8e0e7276f1aa70ded5f95d"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.261168 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c48f89996-b4jz4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.277637 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.279381 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.286959 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.287640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.292235 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.325598 4675 scope.go:117] "RemoveContainer" containerID="4264ccdcf40ddea59d6e2218fa455a40cfc619138a80e2ca93fd5dfcf946c8e4"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374402 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374447 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374473 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374538 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374561 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374596 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.374618 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477020 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477300 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477355 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477379 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477410 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477464 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477488 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.477507 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.482097 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.506789 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.507382 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.509996 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.514758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.528825 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.584578 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.633826 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.634954 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:49 crc kubenswrapper[4675]: I0124 07:12:49.931162 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.120708 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c48f89996-b4jz4"]
Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.267027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerStarted","Data":"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"}
Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.352136 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dbffd67c8-k8gzb"]
Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.755662 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.983871 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e547740-a536-4d48-96a0-d22ca8bca63f" path="/var/lib/kubelet/pods/1e547740-a536-4d48-96a0-d22ca8bca63f/volumes"
Jan 24 07:12:50 crc kubenswrapper[4675]: I0124 07:12:50.984638 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c064344-984b-40fd-9a3b-503d8e1531fd"
path="/var/lib/kubelet/pods/4c064344-984b-40fd-9a3b-503d8e1531fd/volumes" Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.295501 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c48f89996-b4jz4" event={"ID":"bf1f40fb-34b7-494b-bed1-b851a073ac8c","Type":"ContainerStarted","Data":"340a60ba91a5c112feda44c5853209a5d92c4929885af3b1133ffcc6a62e4d2c"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.295764 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c48f89996-b4jz4" event={"ID":"bf1f40fb-34b7-494b-bed1-b851a073ac8c","Type":"ContainerStarted","Data":"e7fa95b4125491815a23fb9aa6ecd0371264f88d9e12286196245a7450a7d2ec"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.301357 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dbffd67c8-k8gzb" event={"ID":"405f0f26-61a4-4420-a147-43d7b86ebb8e","Type":"ContainerStarted","Data":"735478c8cb0c4c3ea93098a1dbe94f07614dae1ef0bd881871ef6f4f3ad0ebef"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.301409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dbffd67c8-k8gzb" event={"ID":"405f0f26-61a4-4420-a147-43d7b86ebb8e","Type":"ContainerStarted","Data":"1727be14daac0838f03226de3925fb536dac83daaff543cd783cd74c07aa0e25"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.301437 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.304795 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerStarted","Data":"e4fd29804bf1cbcfbb72dce66fcc9bef4e155c732e7dbfc8e6239baf96486755"} Jan 24 07:12:51 crc kubenswrapper[4675]: I0124 07:12:51.325322 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dbffd67c8-k8gzb" 
podStartSLOduration=3.325306843 podStartE2EDuration="3.325306843s" podCreationTimestamp="2026-01-24 07:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:51.318549011 +0000 UTC m=+1172.614654234" watchObservedRunningTime="2026-01-24 07:12:51.325306843 +0000 UTC m=+1172.621412066" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.337595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerStarted","Data":"a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.361318 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c48f89996-b4jz4" event={"ID":"bf1f40fb-34b7-494b-bed1-b851a073ac8c","Type":"ContainerStarted","Data":"b8a8290d1b7a8bed3c58145348a0cb96a27c48d50a18f8eb1c5bb69798e76601"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.361607 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.371631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerStarted","Data":"ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.375921 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerStarted","Data":"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"} Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.387968 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c48f89996-b4jz4" 
podStartSLOduration=4.387952333 podStartE2EDuration="4.387952333s" podCreationTimestamp="2026-01-24 07:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:52.381216162 +0000 UTC m=+1173.677321385" watchObservedRunningTime="2026-01-24 07:12:52.387952333 +0000 UTC m=+1173.684057556" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.463333 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-58bxq" podStartSLOduration=5.501753306 podStartE2EDuration="58.463317941s" podCreationTimestamp="2026-01-24 07:11:54 +0000 UTC" firstStartedPulling="2026-01-24 07:11:56.487429463 +0000 UTC m=+1117.783534686" lastFinishedPulling="2026-01-24 07:12:49.448994098 +0000 UTC m=+1170.745099321" observedRunningTime="2026-01-24 07:12:52.427631331 +0000 UTC m=+1173.723736554" watchObservedRunningTime="2026-01-24 07:12:52.463317941 +0000 UTC m=+1173.759423164" Jan 24 07:12:52 crc kubenswrapper[4675]: I0124 07:12:52.465499 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.465492824 podStartE2EDuration="9.465492824s" podCreationTimestamp="2026-01-24 07:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:52.456183389 +0000 UTC m=+1173.752288612" watchObservedRunningTime="2026-01-24 07:12:52.465492824 +0000 UTC m=+1173.761598047" Jan 24 07:12:53 crc kubenswrapper[4675]: I0124 07:12:53.397622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerStarted","Data":"92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e"} Jan 24 07:12:53 crc kubenswrapper[4675]: I0124 07:12:53.398569 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:12:53 crc kubenswrapper[4675]: I0124 07:12:53.421946 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.421922262 podStartE2EDuration="4.421922262s" podCreationTimestamp="2026-01-24 07:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:12:53.419631917 +0000 UTC m=+1174.715737140" watchObservedRunningTime="2026-01-24 07:12:53.421922262 +0000 UTC m=+1174.718027485" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.151202 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.188116 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.188177 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.251842 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.283515 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.313895 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" 
probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.414005 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 07:12:54 crc kubenswrapper[4675]: I0124 07:12:54.414899 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 24 07:12:55 crc kubenswrapper[4675]: I0124 07:12:55.422268 4675 generic.go:334] "Generic (PLEG): container finished" podID="57270c73-9e5a-4629-8c7a-85123438a067" containerID="5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8" exitCode=0 Jan 24 07:12:55 crc kubenswrapper[4675]: I0124 07:12:55.422458 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerDied","Data":"5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8"} Jan 24 07:12:57 crc kubenswrapper[4675]: I0124 07:12:57.975264 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 24 07:12:58 crc kubenswrapper[4675]: I0124 07:12:58.447509 4675 generic.go:334] "Generic (PLEG): container finished" podID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerID="ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c" exitCode=0 Jan 24 07:12:58 crc kubenswrapper[4675]: I0124 07:12:58.447549 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerDied","Data":"ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c"} Jan 24 07:12:59 crc kubenswrapper[4675]: I0124 07:12:59.881052 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Jan 24 07:12:59 crc kubenswrapper[4675]: I0124 07:12:59.948257 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 07:12:59 crc kubenswrapper[4675]: I0124 07:12:59.951109 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.017182 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.028257 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.470689 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:00 crc kubenswrapper[4675]: I0124 07:13:00.471157 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.009573 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.049878 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.114917 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") pod \"57270c73-9e5a-4629-8c7a-85123438a067\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.114986 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") pod \"57270c73-9e5a-4629-8c7a-85123438a067\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.115043 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") pod \"57270c73-9e5a-4629-8c7a-85123438a067\" (UID: \"57270c73-9e5a-4629-8c7a-85123438a067\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.131115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz" (OuterVolumeSpecName: "kube-api-access-txvlz") pod "57270c73-9e5a-4629-8c7a-85123438a067" (UID: "57270c73-9e5a-4629-8c7a-85123438a067"). InnerVolumeSpecName "kube-api-access-txvlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.138880 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "57270c73-9e5a-4629-8c7a-85123438a067" (UID: "57270c73-9e5a-4629-8c7a-85123438a067"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.187262 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57270c73-9e5a-4629-8c7a-85123438a067" (UID: "57270c73-9e5a-4629-8c7a-85123438a067"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216065 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216165 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216205 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216223 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc 
kubenswrapper[4675]: I0124 07:13:02.216259 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216297 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") pod \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\" (UID: \"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7\") " Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216640 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216893 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216909 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txvlz\" (UniqueName: \"kubernetes.io/projected/57270c73-9e5a-4629-8c7a-85123438a067-kube-api-access-txvlz\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216919 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.216945 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57270c73-9e5a-4629-8c7a-85123438a067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.220911 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25" (OuterVolumeSpecName: "kube-api-access-psx25") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "kube-api-access-psx25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.223899 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.243877 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts" (OuterVolumeSpecName: "scripts") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.260836 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.292939 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data" (OuterVolumeSpecName: "config-data") pod "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" (UID: "0d590a0d-6c41-407a-8e89-3e7b9a64a3f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318230 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318269 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318279 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318288 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psx25\" (UniqueName: \"kubernetes.io/projected/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-kube-api-access-psx25\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.318297 4675 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.501888 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g8f6m" event={"ID":"57270c73-9e5a-4629-8c7a-85123438a067","Type":"ContainerDied","Data":"5b259eb76af8e66f76ee1bcfd7ccd3f155f31927bbacf08cb7666192371fbd27"} Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.501942 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b259eb76af8e66f76ee1bcfd7ccd3f155f31927bbacf08cb7666192371fbd27" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.501907 4675 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g8f6m" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.504166 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.504189 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.505298 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-58bxq" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.505298 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-58bxq" event={"ID":"0d590a0d-6c41-407a-8e89-3e7b9a64a3f7","Type":"ContainerDied","Data":"c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc"} Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.505416 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38917fea91ac9c2e54b191a74e3d4deadf5294f5614422ff9a1c2dc377a8acc" Jan 24 07:13:02 crc kubenswrapper[4675]: I0124 07:13:02.871376 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.286666 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.393531 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:03 crc kubenswrapper[4675]: E0124 07:13:03.393946 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57270c73-9e5a-4629-8c7a-85123438a067" containerName="barbican-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.393972 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="57270c73-9e5a-4629-8c7a-85123438a067" 
containerName="barbican-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: E0124 07:13:03.393982 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerName="cinder-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.393987 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerName="cinder-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.394153 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" containerName="cinder-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.394178 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="57270c73-9e5a-4629-8c7a-85123438a067" containerName="barbican-db-sync" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.395063 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407405 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407433 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gdfs9" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407627 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.407760 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.439514 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.498839 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9646bdbd7-ww6xm"] Jan 
24 07:13:03 crc kubenswrapper[4675]: E0124 07:13:03.517798 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.520186 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.524762 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.525029 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.525226 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9pmfh" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.531988 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerStarted","Data":"8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8"} Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566650 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566697 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566738 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566821 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566848 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.566910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.603477 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9646bdbd7-ww6xm"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.651121 4675 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-67c5df6588-xqvmq"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.664099 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672796 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672857 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672877 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672899 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.672977 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.673002 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.681968 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.682412 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.692086 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.693282 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.701453 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.721453 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783691 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6h8\" (UniqueName: \"kubernetes.io/projected/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-kube-api-access-xd6h8\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-combined-ca-bundle\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783886 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data-custom\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783905 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.783976 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-logs\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.825110 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67c5df6588-xqvmq"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.890968 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-combined-ca-bundle\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.899947 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6h8\" (UniqueName: \"kubernetes.io/projected/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-kube-api-access-xd6h8\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900202 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-combined-ca-bundle\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: 
\"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900416 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data-custom\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900585 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data-custom\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900691 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.900990 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-logs\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.901105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c5d104c-9f26-49fd-bec5-f62a53503d42-logs\") pod 
\"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.901245 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbwp\" (UniqueName: \"kubernetes.io/projected/3c5d104c-9f26-49fd-bec5-f62a53503d42-kube-api-access-6tbwp\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.901398 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.907102 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-logs\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.915435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data-custom\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.919166 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-combined-ca-bundle\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.942880 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.955863 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.970966 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-config-data\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:03 crc kubenswrapper[4675]: I0124 07:13:03.983437 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6h8\" (UniqueName: \"kubernetes.io/projected/be4ebeb1-6268-4363-948f-8f9aa8f61fe9-kube-api-access-xd6h8\") pod \"barbican-worker-9646bdbd7-ww6xm\" (UID: \"be4ebeb1-6268-4363-948f-8f9aa8f61fe9\") " pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003524 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-combined-ca-bundle\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003596 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data-custom\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003692 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c5d104c-9f26-49fd-bec5-f62a53503d42-logs\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003735 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbwp\" (UniqueName: \"kubernetes.io/projected/3c5d104c-9f26-49fd-bec5-f62a53503d42-kube-api-access-6tbwp\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.003758 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.008486 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c5d104c-9f26-49fd-bec5-f62a53503d42-logs\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.016323 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-combined-ca-bundle\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.023097 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.062847 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c5d104c-9f26-49fd-bec5-f62a53503d42-config-data-custom\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.063571 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.078481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbwp\" (UniqueName: \"kubernetes.io/projected/3c5d104c-9f26-49fd-bec5-f62a53503d42-kube-api-access-6tbwp\") pod \"barbican-keystone-listener-67c5df6588-xqvmq\" (UID: \"3c5d104c-9f26-49fd-bec5-f62a53503d42\") " pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.125662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"cinder-scheduler-0\" 
(UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " pod="openstack/cinder-scheduler-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.147779 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9646bdbd7-ww6xm" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.167225 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213250 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213295 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213326 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213420 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.213487 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.252965 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"] Jan 24 07:13:04 crc kubenswrapper[4675]: E0124 07:13:04.253571 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-dhd8m ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-774db89647-4t4lw" podUID="3871ad3e-a2e3-488f-8b8b-9db66e3af5de" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.317241 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318753 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318814 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318864 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318883 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318910 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: 
\"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.318930 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.330390 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.335121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.338561 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.340919 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.344618 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.345050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.345358 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.363179 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.363658 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.386217 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.458897 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.474959 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"dnsmasq-dns-774db89647-4t4lw\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526740 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526815 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526875 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526898 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.526967 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.527007 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.547575 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent" containerID="cri-o://0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48" gracePeriod=30 Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.554928 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.555636 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.556027 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" containerID="cri-o://8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8" gracePeriod=30 Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.556735 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" containerID="cri-o://9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb" gracePeriod=30 Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.574774 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.576311 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.587162 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.592632 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.632164 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.633556 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.643959 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644085 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644110 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644126 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644179 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.644331 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.647790 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.652037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.652284 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.657935 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67cddfd9dd-rbhzj" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.680440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") 
pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.681991 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.687853 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.690314 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"cinder-api-0\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") " pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.690808 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.711507 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.744371 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749368 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749476 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749503 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749536 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 
crc kubenswrapper[4675]: I0124 07:13:04.749596 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") pod \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\" (UID: \"3871ad3e-a2e3-488f-8b8b-9db66e3af5de\") " Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749842 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749882 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749916 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.749953 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750026 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750113 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750140 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" 
Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750159 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.750197 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.752382 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.753545 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.753992 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config" (OuterVolumeSpecName: "config") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.754479 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.760912 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.762647 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m" (OuterVolumeSpecName: "kube-api-access-dhd8m") pod "3871ad3e-a2e3-488f-8b8b-9db66e3af5de" (UID: "3871ad3e-a2e3-488f-8b8b-9db66e3af5de"). InnerVolumeSpecName "kube-api-access-dhd8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851305 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851377 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " 
pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851397 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851461 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851507 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: 
I0124 07:13:04.851521 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851568 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851579 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851587 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851597 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851605 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.851615 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhd8m\" (UniqueName: \"kubernetes.io/projected/3871ad3e-a2e3-488f-8b8b-9db66e3af5de-kube-api-access-dhd8m\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:04 crc kubenswrapper[4675]: 
I0124 07:13:04.858450 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.858560 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.859177 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.859758 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.861323 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.862036 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.865167 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.865706 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.874296 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.900933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"barbican-api-6dff87ccf4-s6k69\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") " pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.933274 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.975546 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"dnsmasq-dns-6578955fd5-g9hmc\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:04 crc kubenswrapper[4675]: I0124 07:13:04.987555 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.002781 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9646bdbd7-ww6xm"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.169442 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.169941 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" containerID="cri-o://31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c" gracePeriod=30 Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.170766 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" containerID="cri-o://e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429" gracePeriod=30 Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.234726 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77c5f475df-4zndh"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.236129 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.254476 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c5f475df-4zndh"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365726 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-combined-ca-bundle\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365826 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-ovndb-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365846 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-httpd-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365863 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-public-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365924 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-internal-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.365955 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fx9\" (UniqueName: \"kubernetes.io/projected/4dd8da22-c828-48e1-bbab-d7360beb8d9f-kube-api-access-b4fx9\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.453817 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468616 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fx9\" (UniqueName: \"kubernetes.io/projected/4dd8da22-c828-48e1-bbab-d7360beb8d9f-kube-api-access-b4fx9\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468663 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-combined-ca-bundle\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:05 crc 
kubenswrapper[4675]: I0124 07:13:05.468824 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-ovndb-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468845 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-httpd-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468861 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468889 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-public-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.468932 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-internal-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.484812 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-ovndb-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.498535 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-httpd-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.501510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-internal-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.501693 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67c5df6588-xqvmq"]
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.502311 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-public-tls-certs\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.504649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-config\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.515471 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd8da22-c828-48e1-bbab-d7360beb8d9f-combined-ca-bundle\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.534779 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fx9\" (UniqueName: \"kubernetes.io/projected/4dd8da22-c828-48e1-bbab-d7360beb8d9f-kube-api-access-b4fx9\") pod \"neutron-77c5f475df-4zndh\" (UID: \"4dd8da22-c828-48e1-bbab-d7360beb8d9f\") " pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.582820 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.589424 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": read tcp 10.217.0.2:35164->10.217.0.154:9696: read: connection reset by peer"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.613945 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerStarted","Data":"d654f868dd8eaa55d5aced57050491d4bfc469df89f624ef9e17f9340f8f21b9"}
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.640390 4675 generic.go:334] "Generic (PLEG): container finished" podID="62b7e06f-b840-408c-b026-a086b975812f" containerID="9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb" exitCode=2
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.641735 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb"}
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.649542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.650224 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9646bdbd7-ww6xm" event={"ID":"be4ebeb1-6268-4363-948f-8f9aa8f61fe9","Type":"ContainerStarted","Data":"ff7a8cc9d923e6a2affe78a00ab26b24d1af2607333da6ad302c7009f39e265a"}
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.662758 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774db89647-4t4lw"
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.664253 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" event={"ID":"3c5d104c-9f26-49fd-bec5-f62a53503d42","Type":"ContainerStarted","Data":"5b772e29cb07f523054debab49f022b04588f8f30e3edb348f80f72b74c89242"}
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.767279 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"]
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.795088 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-774db89647-4t4lw"]
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.920828 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"]
Jan 24 07:13:05 crc kubenswrapper[4675]: I0124 07:13:05.950882 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"]
Jan 24 07:13:05 crc kubenswrapper[4675]: W0124 07:13:05.971078 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7d870cc_b2a9_442f_9779_bf9fbeb8ce2b.slice/crio-40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a WatchSource:0}: Error finding container 40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a: Status 404 returned error can't find the container with id 40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a
Jan 24 07:13:05 crc kubenswrapper[4675]: W0124 07:13:05.989120 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ac4ed7_04e2_420b_b6cd_4021c5cd1b9f.slice/crio-ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196 WatchSource:0}: Error finding container ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196: Status 404 returned error can't find the container with id ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.394266 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77c5f475df-4zndh"]
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.725482 4675 generic.go:334] "Generic (PLEG): container finished" podID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerID="e5be4111b174b0893302ec72491db10290b0324936ef7df715e08fe66a0569cc" exitCode=0
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.726375 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerDied","Data":"e5be4111b174b0893302ec72491db10290b0324936ef7df715e08fe66a0569cc"}
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.726405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerStarted","Data":"40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a"}
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.777440 4675 generic.go:334] "Generic (PLEG): container finished" podID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429" exitCode=0
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.777686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerDied","Data":"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"}
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.808963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerStarted","Data":"e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8"}
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.809004 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerStarted","Data":"ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196"}
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.881605 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerStarted","Data":"822c1c454fd0151d45480f18166086e5ee1d458af466ed3b7a6d7251dabea3aa"}
Jan 24 07:13:06 crc kubenswrapper[4675]: I0124 07:13:06.973781 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3871ad3e-a2e3-488f-8b8b-9db66e3af5de" path="/var/lib/kubelet/pods/3871ad3e-a2e3-488f-8b8b-9db66e3af5de/volumes"
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.052587 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5cfd8b5875-msfrk" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused"
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.488254 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.812152 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="009254f3-9d76-4d89-8e35-d2b4c4be0da8" containerName="galera" probeResult="failure" output="command timed out"
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.861458 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk"
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.995931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") "
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.995989 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") "
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996058 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") "
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996091 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") "
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996159 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") "
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996215 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") "
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.996284 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") pod \"26894b11-10b1-4fa5-bd28-2fb4022c467b\" (UID: \"26894b11-10b1-4fa5-bd28-2fb4022c467b\") "
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.999493 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerStarted","Data":"2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb"}
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.999553 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dff87ccf4-s6k69"
Jan 24 07:13:07 crc kubenswrapper[4675]: I0124 07:13:07.999575 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dff87ccf4-s6k69"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.014159 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.044923 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj" (OuterVolumeSpecName: "kube-api-access-8z8wj") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "kube-api-access-8z8wj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.061451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerStarted","Data":"247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de"}
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.063805 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dff87ccf4-s6k69" podStartSLOduration=4.063794022 podStartE2EDuration="4.063794022s" podCreationTimestamp="2026-01-24 07:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:08.058242148 +0000 UTC m=+1189.354347371" watchObservedRunningTime="2026-01-24 07:13:08.063794022 +0000 UTC m=+1189.359899255"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.100161 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z8wj\" (UniqueName: \"kubernetes.io/projected/26894b11-10b1-4fa5-bd28-2fb4022c467b-kube-api-access-8z8wj\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.100198 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.112959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c5f475df-4zndh" event={"ID":"4dd8da22-c828-48e1-bbab-d7360beb8d9f","Type":"ContainerStarted","Data":"84fef0fac78f24b3e6b8786f384aaf6f2cd1af7ab01ee22f772c1f94b033e487"}
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.113002 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c5f475df-4zndh" event={"ID":"4dd8da22-c828-48e1-bbab-d7360beb8d9f","Type":"ContainerStarted","Data":"6587db20d5f195ec6ce7bb19011525e5299b19984aa98cc842769ccd592191ef"}
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.124131 4675 generic.go:334] "Generic (PLEG): container finished" podID="62b7e06f-b840-408c-b026-a086b975812f" containerID="0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48" exitCode=0
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.125050 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48"}
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160483 4675 generic.go:334] "Generic (PLEG): container finished" podID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c" exitCode=0
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160525 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerDied","Data":"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"}
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160552 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cfd8b5875-msfrk" event={"ID":"26894b11-10b1-4fa5-bd28-2fb4022c467b","Type":"ContainerDied","Data":"5ce213dfa2f439c1f1ec2ddd0ebc6a5f1f2676cc28fbcde221469753153be07d"}
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160567 4675 scope.go:117] "RemoveContainer" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.160607 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cfd8b5875-msfrk"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.188602 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.201647 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.211394 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config" (OuterVolumeSpecName: "config") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.238362 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.272102 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.305870 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.305905 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.305917 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.342535 4675 scope.go:117] "RemoveContainer" containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.348346 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "26894b11-10b1-4fa5-bd28-2fb4022c467b" (UID: "26894b11-10b1-4fa5-bd28-2fb4022c467b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.407174 4675 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26894b11-10b1-4fa5-bd28-2fb4022c467b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.417004 4675 scope.go:117] "RemoveContainer" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"
Jan 24 07:13:08 crc kubenswrapper[4675]: E0124 07:13:08.421511 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429\": container with ID starting with e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429 not found: ID does not exist" containerID="e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.421552 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429"} err="failed to get container status \"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429\": rpc error: code = NotFound desc = could not find container \"e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429\": container with ID starting with e12810205916a6bf3db2d60835ce7e2c06108e587b7cca108ffc1e9bb886a429 not found: ID does not exist"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.421575 4675 scope.go:117] "RemoveContainer" containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"
Jan 24 07:13:08 crc kubenswrapper[4675]: E0124 07:13:08.424381 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c\": container with ID starting with 31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c not found: ID does not exist" containerID="31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.424418 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c"} err="failed to get container status \"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c\": rpc error: code = NotFound desc = could not find container \"31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c\": container with ID starting with 31dfb8b24af2e497ff9721b73527fba2a7d61af59e2fe6ecd8c493351d58fa5c not found: ID does not exist"
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.566513 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"]
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.583598 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cfd8b5875-msfrk"]
Jan 24 07:13:08 crc kubenswrapper[4675]: I0124 07:13:08.973796 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" path="/var/lib/kubelet/pods/26894b11-10b1-4fa5-bd28-2fb4022c467b/volumes"
Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.175897 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerStarted","Data":"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055"}
Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.180117 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerStarted","Data":"80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5"}
Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.194976 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77c5f475df-4zndh" event={"ID":"4dd8da22-c828-48e1-bbab-d7360beb8d9f","Type":"ContainerStarted","Data":"29aeb18c407661cfcbc9b30181a62734f146aaebc5b1ba39c7ab5883861c6c5b"}
Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.195218 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77c5f475df-4zndh"
Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.205642 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" podStartSLOduration=5.205619832 podStartE2EDuration="5.205619832s" podCreationTimestamp="2026-01-24 07:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:09.200199571 +0000 UTC m=+1190.496304794" watchObservedRunningTime="2026-01-24 07:13:09.205619832 +0000 UTC m=+1190.501725055"
Jan 24 07:13:09 crc kubenswrapper[4675]: I0124 07:13:09.988786 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc"
Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.216451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerStarted","Data":"551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb"}
Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.216613 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" containerID="cri-o://247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de" gracePeriod=30
Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.216934 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.217209 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" containerID="cri-o://551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb" gracePeriod=30
Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.242485 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.24246462 podStartE2EDuration="6.24246462s" podCreationTimestamp="2026-01-24 07:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:10.236771332 +0000 UTC m=+1191.532876575" watchObservedRunningTime="2026-01-24 07:13:10.24246462 +0000 UTC m=+1191.538569843"
Jan 24 07:13:10 crc kubenswrapper[4675]: I0124 07:13:10.243050 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77c5f475df-4zndh" podStartSLOduration=5.243044104 podStartE2EDuration="5.243044104s" podCreationTimestamp="2026-01-24 07:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:09.229118819 +0000 UTC m=+1190.525224042" watchObservedRunningTime="2026-01-24 07:13:10.243044104 +0000 UTC m=+1191.539149327"
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.248611 4675 generic.go:334] "Generic (PLEG): container finished" podID="c678b4d9-4b62-4225-b60b-06753bb72445" containerID="551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb" exitCode=0
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.249344 4675 generic.go:334] "Generic (PLEG): container finished" podID="c678b4d9-4b62-4225-b60b-06753bb72445" containerID="247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de" exitCode=143
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.249318 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerDied","Data":"551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb"}
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.249429 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerDied","Data":"247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de"}
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.302364 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.383647 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") "
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386461 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") "
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386709 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") "
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386821 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") "
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.386855 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") "
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387041 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387190 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") "
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387217 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") pod \"c678b4d9-4b62-4225-b60b-06753bb72445\" (UID: \"c678b4d9-4b62-4225-b60b-06753bb72445\") "
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.387544 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs" (OuterVolumeSpecName: "logs") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.388117 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c678b4d9-4b62-4225-b60b-06753bb72445-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.388182 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c678b4d9-4b62-4225-b60b-06753bb72445-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.394852 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh" (OuterVolumeSpecName: "kube-api-access-558lh") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "kube-api-access-558lh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.400053 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.401876 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts" (OuterVolumeSpecName: "scripts") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.489839 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.490853 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.490922 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-558lh\" (UniqueName: \"kubernetes.io/projected/c678b4d9-4b62-4225-b60b-06753bb72445-kube-api-access-558lh\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.532049 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.543760 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data" (OuterVolumeSpecName: "config-data") pod "c678b4d9-4b62-4225-b60b-06753bb72445" (UID: "c678b4d9-4b62-4225-b60b-06753bb72445"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.592875 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.593132 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c678b4d9-4b62-4225-b60b-06753bb72445-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.682784 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79656b6bf8-nwng8"] Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683150 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683167 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683195 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683201 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683212 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683218 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" Jan 24 07:13:11 crc kubenswrapper[4675]: E0124 07:13:11.683233 4675 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683240 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683394 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api-log" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683416 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683432 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="26894b11-10b1-4fa5-bd28-2fb4022c467b" containerName="neutron-httpd" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.683441 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" containerName="cinder-api" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.684582 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.688635 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.688818 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.713551 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79656b6bf8-nwng8"] Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796207 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-combined-ca-bundle\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796262 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796295 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-internal-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796318 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-public-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldqkz\" (UniqueName: \"kubernetes.io/projected/17e03478-4656-43f8-8d7b-5dfb1ff160a1-kube-api-access-ldqkz\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796467 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data-custom\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.796600 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e03478-4656-43f8-8d7b-5dfb1ff160a1-logs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e03478-4656-43f8-8d7b-5dfb1ff160a1-logs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898686 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-combined-ca-bundle\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898740 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898777 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-internal-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898803 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-public-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898896 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldqkz\" (UniqueName: \"kubernetes.io/projected/17e03478-4656-43f8-8d7b-5dfb1ff160a1-kube-api-access-ldqkz\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.898930 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data-custom\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.900112 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17e03478-4656-43f8-8d7b-5dfb1ff160a1-logs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.913300 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-combined-ca-bundle\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.922216 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-public-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.923087 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-internal-tls-certs\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.926807 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldqkz\" (UniqueName: 
\"kubernetes.io/projected/17e03478-4656-43f8-8d7b-5dfb1ff160a1-kube-api-access-ldqkz\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.927056 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data-custom\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:11 crc kubenswrapper[4675]: I0124 07:13:11.929104 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e03478-4656-43f8-8d7b-5dfb1ff160a1-config-data\") pod \"barbican-api-79656b6bf8-nwng8\" (UID: \"17e03478-4656-43f8-8d7b-5dfb1ff160a1\") " pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.011271 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.273473 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9646bdbd7-ww6xm" event={"ID":"be4ebeb1-6268-4363-948f-8f9aa8f61fe9","Type":"ContainerStarted","Data":"172fca2ea66c7a098d930f88a6c77967b30ec77e35d1c627a2ac60c7517c2d53"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.273848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9646bdbd7-ww6xm" event={"ID":"be4ebeb1-6268-4363-948f-8f9aa8f61fe9","Type":"ContainerStarted","Data":"c0520aff7546d042d6808c308677af9396e47ba5ae36c423c45c723550f7c668"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.293862 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" event={"ID":"3c5d104c-9f26-49fd-bec5-f62a53503d42","Type":"ContainerStarted","Data":"11922f6491c30ae72c9053675e373018b68132bf345912487af17dd08467bb6d"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.293917 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" event={"ID":"3c5d104c-9f26-49fd-bec5-f62a53503d42","Type":"ContainerStarted","Data":"ef45fd3ca461dda1926595bea60474d27f5b84f19ee20d00c31af63403af0442"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.319849 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9646bdbd7-ww6xm" podStartSLOduration=3.481292273 podStartE2EDuration="9.319830904s" podCreationTimestamp="2026-01-24 07:13:03 +0000 UTC" firstStartedPulling="2026-01-24 07:13:05.092473216 +0000 UTC m=+1186.388578439" lastFinishedPulling="2026-01-24 07:13:10.931011847 +0000 UTC m=+1192.227117070" observedRunningTime="2026-01-24 07:13:12.299044523 +0000 UTC m=+1193.595149746" watchObservedRunningTime="2026-01-24 07:13:12.319830904 +0000 UTC m=+1193.615936127" Jan 
24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.328697 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c678b4d9-4b62-4225-b60b-06753bb72445","Type":"ContainerDied","Data":"822c1c454fd0151d45480f18166086e5ee1d458af466ed3b7a6d7251dabea3aa"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.328748 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.328959 4675 scope.go:117] "RemoveContainer" containerID="551823288a4d17c772de7117b904e28f6926fd39336d7f64ad216ca8563836bb" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.347435 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerStarted","Data":"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6"} Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.350681 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-67c5df6588-xqvmq" podStartSLOduration=3.879686423 podStartE2EDuration="9.350663758s" podCreationTimestamp="2026-01-24 07:13:03 +0000 UTC" firstStartedPulling="2026-01-24 07:13:05.464291455 +0000 UTC m=+1186.760396678" lastFinishedPulling="2026-01-24 07:13:10.93526879 +0000 UTC m=+1192.231374013" observedRunningTime="2026-01-24 07:13:12.327066968 +0000 UTC m=+1193.623172191" watchObservedRunningTime="2026-01-24 07:13:12.350663758 +0000 UTC m=+1193.646768981" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.404549 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.828277379 podStartE2EDuration="9.404532418s" podCreationTimestamp="2026-01-24 07:13:03 +0000 UTC" firstStartedPulling="2026-01-24 07:13:05.431598275 +0000 UTC m=+1186.727703488" lastFinishedPulling="2026-01-24 
07:13:07.007853304 +0000 UTC m=+1188.303958527" observedRunningTime="2026-01-24 07:13:12.379105803 +0000 UTC m=+1193.675211026" watchObservedRunningTime="2026-01-24 07:13:12.404532418 +0000 UTC m=+1193.700637641" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.449986 4675 scope.go:117] "RemoveContainer" containerID="247e02fca5b905cd0d93b258db42bb0be8938ce0ebace5d52207500fcb2fd5de" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.461836 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.485263 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.499432 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.501079 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.512585 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.513332 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.513578 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.513785 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.618709 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79656b6bf8-nwng8"] Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622644 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622698 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f870976e-13a5-4226-9eff-18a3244582e8-logs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622789 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhm9\" (UniqueName: \"kubernetes.io/projected/f870976e-13a5-4226-9eff-18a3244582e8-kube-api-access-hdhm9\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.622891 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-public-tls-certs\") pod 
\"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.623002 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-scripts\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.623053 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f870976e-13a5-4226-9eff-18a3244582e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.623111 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734756 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734865 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-scripts\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734896 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f870976e-13a5-4226-9eff-18a3244582e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734924 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734963 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734983 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f870976e-13a5-4226-9eff-18a3244582e8-logs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.734998 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.735024 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdhm9\" (UniqueName: \"kubernetes.io/projected/f870976e-13a5-4226-9eff-18a3244582e8-kube-api-access-hdhm9\") pod \"cinder-api-0\" (UID: 
\"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.735393 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.736114 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f870976e-13a5-4226-9eff-18a3244582e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.736364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f870976e-13a5-4226-9eff-18a3244582e8-logs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.745391 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.745820 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-scripts\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.746016 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.746303 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.749662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-config-data\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.750239 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f870976e-13a5-4226-9eff-18a3244582e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.764962 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdhm9\" (UniqueName: \"kubernetes.io/projected/f870976e-13a5-4226-9eff-18a3244582e8-kube-api-access-hdhm9\") pod \"cinder-api-0\" (UID: \"f870976e-13a5-4226-9eff-18a3244582e8\") " pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.829785 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 24 07:13:12 crc kubenswrapper[4675]: I0124 07:13:12.957149 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c678b4d9-4b62-4225-b60b-06753bb72445" path="/var/lib/kubelet/pods/c678b4d9-4b62-4225-b60b-06753bb72445/volumes" Jan 24 07:13:13 crc kubenswrapper[4675]: I0124 07:13:13.364735 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 24 07:13:13 crc kubenswrapper[4675]: I0124 07:13:13.374757 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79656b6bf8-nwng8" event={"ID":"17e03478-4656-43f8-8d7b-5dfb1ff160a1","Type":"ContainerStarted","Data":"565fdc104dc9f25ad7bebfbde21cc227956f2152df8f23f7faa79ca875aa4f6e"} Jan 24 07:13:13 crc kubenswrapper[4675]: I0124 07:13:13.374805 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79656b6bf8-nwng8" event={"ID":"17e03478-4656-43f8-8d7b-5dfb1ff160a1","Type":"ContainerStarted","Data":"05fd68b1a31a310dbf5958733a66abbf6b9d393f1b1d6331ab684062134aa61a"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.312227 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.312944 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-656ff794dd-jx8ld" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.313793 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1"} pod="openstack/horizon-656ff794dd-jx8ld" containerMessage="Container horizon failed 
startup probe, will be restarted" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.313907 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" containerID="cri-o://7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1" gracePeriod=30 Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.364431 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.365796 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": dial tcp 10.217.0.159:8080: connect: connection refused" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.414982 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f870976e-13a5-4226-9eff-18a3244582e8","Type":"ContainerStarted","Data":"0875375a259484ea75e6fab6e15e9ca59a92c2605e4a9463ffd7e91a50ab446d"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.415033 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f870976e-13a5-4226-9eff-18a3244582e8","Type":"ContainerStarted","Data":"1a180ff61c05b88618b110f5bf4206d30d93233c1d6b71ba38239792edb30cc8"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.428337 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79656b6bf8-nwng8" event={"ID":"17e03478-4656-43f8-8d7b-5dfb1ff160a1","Type":"ContainerStarted","Data":"a900a5a55271a45364da2d4a0eaa988541fd7e143f83b24101bd2b062bb452c6"} Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.428653 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.428673 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.493088 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79656b6bf8-nwng8" podStartSLOduration=3.493066531 podStartE2EDuration="3.493066531s" podCreationTimestamp="2026-01-24 07:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:14.479080003 +0000 UTC m=+1195.775185216" watchObservedRunningTime="2026-01-24 07:13:14.493066531 +0000 UTC m=+1195.789171754" Jan 24 07:13:14 crc kubenswrapper[4675]: I0124 07:13:14.990922 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.059007 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.059280 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns" containerID="cri-o://13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42" gracePeriod=10 Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.481136 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f870976e-13a5-4226-9eff-18a3244582e8","Type":"ContainerStarted","Data":"bec4e19d24c4e8e817d0b266ad5c4080a43f29c5af6cd880e38f03bad869bcaa"} Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.482712 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 24 07:13:15 crc 
kubenswrapper[4675]: I0124 07:13:15.493697 4675 generic.go:334] "Generic (PLEG): container finished" podID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerID="13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42" exitCode=0 Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.494638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerDied","Data":"13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42"} Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.514102 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.514083696 podStartE2EDuration="3.514083696s" podCreationTimestamp="2026-01-24 07:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:15.504604928 +0000 UTC m=+1196.800710151" watchObservedRunningTime="2026-01-24 07:13:15.514083696 +0000 UTC m=+1196.810188919" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.669863 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.819603 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.819925 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.819993 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.820066 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.820141 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.820189 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") pod \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\" (UID: \"ce054cb0-d2ad-4960-9078-d977ce3ca9e6\") " Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.834974 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l" (OuterVolumeSpecName: "kube-api-access-xjw4l") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "kube-api-access-xjw4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.909322 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.909755 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config" (OuterVolumeSpecName: "config") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.919124 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922570 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922595 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922604 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjw4l\" (UniqueName: \"kubernetes.io/projected/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-kube-api-access-xjw4l\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.922614 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.925702 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:15 crc kubenswrapper[4675]: I0124 07:13:15.930653 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce054cb0-d2ad-4960-9078-d977ce3ca9e6" (UID: "ce054cb0-d2ad-4960-9078-d977ce3ca9e6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.024587 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.024644 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce054cb0-d2ad-4960-9078-d977ce3ca9e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.505692 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" event={"ID":"ce054cb0-d2ad-4960-9078-d977ce3ca9e6","Type":"ContainerDied","Data":"b1d2359f9bd1730fd38d06c61fbb2923f790bf4bfe4ea9760488e068602ba6b1"} Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.505781 4675 scope.go:117] "RemoveContainer" containerID="13389cde8deb313942b430bed79fe311467f7a70f6bafc3afe90b1cc763d7d42" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.505748 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-zrk79" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.545862 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.558020 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-zrk79"] Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.614799 4675 scope.go:117] "RemoveContainer" containerID="a357ef50188a1acd7da313f1d5fc0be108c9ca15168c4882b220cb0612f377a4" Jan 24 07:13:16 crc kubenswrapper[4675]: I0124 07:13:16.953411 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" path="/var/lib/kubelet/pods/ce054cb0-d2ad-4960-9078-d977ce3ca9e6/volumes" Jan 24 07:13:17 crc kubenswrapper[4675]: I0124 07:13:17.638621 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:17 crc kubenswrapper[4675]: I0124 07:13:17.764607 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dff87ccf4-s6k69" Jan 24 07:13:18 crc kubenswrapper[4675]: I0124 07:13:18.188838 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:13:19 crc kubenswrapper[4675]: I0124 07:13:19.043121 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:19 crc kubenswrapper[4675]: I0124 07:13:19.743384 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 07:13:19 crc kubenswrapper[4675]: I0124 07:13:19.796999 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 24 07:13:20 crc kubenswrapper[4675]: I0124 07:13:20.548362 4675 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/cinder-scheduler-0" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler" containerID="cri-o://c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" gracePeriod=30 Jan 24 07:13:20 crc kubenswrapper[4675]: I0124 07:13:20.548522 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe" containerID="cri-o://a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" gracePeriod=30 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.000446 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79656b6bf8-nwng8" Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.100808 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"] Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.101088 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" containerID="cri-o://e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8" gracePeriod=30 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.101904 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" containerID="cri-o://2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb" gracePeriod=30 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.107823 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": EOF" Jan 24 07:13:21 crc 
kubenswrapper[4675]: I0124 07:13:21.570991 4675 generic.go:334] "Generic (PLEG): container finished" podID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerID="e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8" exitCode=143 Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.571039 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerDied","Data":"e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8"} Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.626381 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.820846 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:13:21 crc kubenswrapper[4675]: I0124 07:13:21.874404 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c48f89996-b4jz4" Jan 24 07:13:22 crc kubenswrapper[4675]: I0124 07:13:22.456475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5dbffd67c8-k8gzb" Jan 24 07:13:22 crc kubenswrapper[4675]: I0124 07:13:22.580242 4675 generic.go:334] "Generic (PLEG): container finished" podID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" exitCode=0 Jan 24 07:13:22 crc kubenswrapper[4675]: I0124 07:13:22.580317 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerDied","Data":"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6"} Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.493679 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579641 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579701 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579748 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579810 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579848 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.579895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") pod \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\" (UID: \"46bd6c12-adaa-4fef-9ed2-4111468e21a4\") " Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.580356 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.590965 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts" (OuterVolumeSpecName: "scripts") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.592908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.593478 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5" (OuterVolumeSpecName: "kube-api-access-mt4p5") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "kube-api-access-mt4p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628620 4675 generic.go:334] "Generic (PLEG): container finished" podID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" exitCode=0 Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628669 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerDied","Data":"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055"} Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628699 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"46bd6c12-adaa-4fef-9ed2-4111468e21a4","Type":"ContainerDied","Data":"d654f868dd8eaa55d5aced57050491d4bfc469df89f624ef9e17f9340f8f21b9"} Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628735 4675 scope.go:117] "RemoveContainer" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.628901 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.673988 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700309 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700338 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt4p5\" (UniqueName: \"kubernetes.io/projected/46bd6c12-adaa-4fef-9ed2-4111468e21a4-kube-api-access-mt4p5\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700348 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700358 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.700369 4675 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46bd6c12-adaa-4fef-9ed2-4111468e21a4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.721305 4675 scope.go:117] "RemoveContainer" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.766817 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data" (OuterVolumeSpecName: "config-data") pod "46bd6c12-adaa-4fef-9ed2-4111468e21a4" (UID: "46bd6c12-adaa-4fef-9ed2-4111468e21a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.773881 4675 scope.go:117] "RemoveContainer" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" Jan 24 07:13:23 crc kubenswrapper[4675]: E0124 07:13:23.774614 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6\": container with ID starting with a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6 not found: ID does not exist" containerID="a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.774693 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6"} err="failed to get container status \"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6\": rpc error: code = NotFound desc = could not find container \"a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6\": container with ID starting with a6ee48c69b825f7a91937665e3a2809bfc5ac91404261a780503fe2f4f486dd6 not found: ID does not exist" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.774767 4675 scope.go:117] "RemoveContainer" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" Jan 24 07:13:23 crc kubenswrapper[4675]: E0124 07:13:23.775091 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055\": container with ID starting with c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055 not found: ID does not exist" containerID="c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055" Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.775139 
4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055"} err="failed to get container status \"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055\": rpc error: code = NotFound desc = could not find container \"c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055\": container with ID starting with c2790503594d45da1407e3ed7364adc68604fd1cea7e2fbd32b957d60e629055 not found: ID does not exist"
Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.802044 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46bd6c12-adaa-4fef-9ed2-4111468e21a4-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.965925 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:23 crc kubenswrapper[4675]: I0124 07:13:23.978383 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002040 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002437 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002453 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002463 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002470 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002494 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002501 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.002517 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="init"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002523 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="init"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002832 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="cinder-scheduler"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002849 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce054cb0-d2ad-4960-9078-d977ce3ca9e6" containerName="dnsmasq-dns"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.002860 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" containerName="probe"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.005886 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.007797 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.027482 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.105942 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.105993 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-kube-api-access-rn2st\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106048 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106142 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106194 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.106231 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208052 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208167 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208220 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208258 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208325 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208349 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-kube-api-access-rn2st\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.208838 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.219961 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.220077 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.221426 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.242377 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn2st\" (UniqueName: \"kubernetes.io/projected/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-kube-api-access-rn2st\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.242625 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31cacad0-4d32-4300-8bdc-bbf15fcd77ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31cacad0-4d32-4300-8bdc-bbf15fcd77ac\") " pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.354519 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.378092 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.381576 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.396224 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fcqz5"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.398506 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.402009 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.486831 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522798 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522871 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522902 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.522934 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625695 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625786 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625814 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.625845 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.635547 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.642584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.645476 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.671519 4675 projected.go:194] Error preparing data for projected volume kube-api-access-j6zbn for pod openstack/openstackclient: failed to fetch token: pod "openstackclient" not found
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.671578 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn podName:64489b46-b7cd-4c35-976d-c8397add424a nodeName:}" failed. No retries permitted until 2026-01-24 07:13:25.171560677 +0000 UTC m=+1206.467665890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j6zbn" (UniqueName: "kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn") pod "openstackclient" (UID: "64489b46-b7cd-4c35-976d-c8397add424a") : failed to fetch token: pod "openstackclient" not found
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.671849 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: E0124 07:13:24.678337 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-j6zbn], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="64489b46-b7cd-4c35-976d-c8397add424a"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.692412 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.716955 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.720781 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.723582 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpk7\" (UniqueName: \"kubernetes.io/projected/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-kube-api-access-5lpk7\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839770 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.839830 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config-secret\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941279 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config-secret\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941484 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpk7\" (UniqueName: \"kubernetes.io/projected/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-kube-api-access-5lpk7\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.941510 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.942432 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.956463 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.957424 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-openstack-config-secret\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.964004 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46bd6c12-adaa-4fef-9ed2-4111468e21a4" path="/var/lib/kubelet/pods/46bd6c12-adaa-4fef-9ed2-4111468e21a4/volumes"
Jan 24 07:13:24 crc kubenswrapper[4675]: I0124 07:13:24.969116 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpk7\" (UniqueName: \"kubernetes.io/projected/2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb-kube-api-access-5lpk7\") pod \"openstackclient\" (UID: \"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb\") " pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.037475 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.046133 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.255001 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") pod \"openstackclient\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") " pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: E0124 07:13:25.259154 4675 projected.go:194] Error preparing data for projected volume kube-api-access-j6zbn for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (64489b46-b7cd-4c35-976d-c8397add424a) does not match the UID in record. The object might have been deleted and then recreated
Jan 24 07:13:25 crc kubenswrapper[4675]: E0124 07:13:25.259366 4675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn podName:64489b46-b7cd-4c35-976d-c8397add424a nodeName:}" failed. No retries permitted until 2026-01-24 07:13:26.259311433 +0000 UTC m=+1207.555416656 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-j6zbn" (UniqueName: "kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn") pod "openstackclient" (UID: "64489b46-b7cd-4c35-976d-c8397add424a") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (64489b46-b7cd-4c35-976d-c8397add424a) does not match the UID in record. The object might have been deleted and then recreated
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.535373 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.594573 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:44402->10.217.0.164:9311: read: connection reset by peer"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.594588 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dff87ccf4-s6k69" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:44390->10.217.0.164:9311: read: connection reset by peer"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.632498 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.717890 4675 generic.go:334] "Generic (PLEG): container finished" podID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerID="2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb" exitCode=0
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.718218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerDied","Data":"2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb"}
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.719645 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb","Type":"ContainerStarted","Data":"401ec6dfa8313b57ca54d42eb230a3fde3c398209ea32fd8f590b1acc9c09400"}
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.724579 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.724798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31cacad0-4d32-4300-8bdc-bbf15fcd77ac","Type":"ContainerStarted","Data":"493266e067a9bfb24e67415a7a14e73736f5b13343aedd0dc2c00fe60dbecaa9"}
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.756866 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.763949 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="64489b46-b7cd-4c35-976d-c8397add424a" podUID="2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb"
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.878157 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") pod \"64489b46-b7cd-4c35-976d-c8397add424a\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") "
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.878294 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") pod \"64489b46-b7cd-4c35-976d-c8397add424a\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") "
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.878567 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") pod \"64489b46-b7cd-4c35-976d-c8397add424a\" (UID: \"64489b46-b7cd-4c35-976d-c8397add424a\") "
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.879266 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6zbn\" (UniqueName: \"kubernetes.io/projected/64489b46-b7cd-4c35-976d-c8397add424a-kube-api-access-j6zbn\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.879804 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "64489b46-b7cd-4c35-976d-c8397add424a" (UID: "64489b46-b7cd-4c35-976d-c8397add424a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.891620 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64489b46-b7cd-4c35-976d-c8397add424a" (UID: "64489b46-b7cd-4c35-976d-c8397add424a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.906826 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "64489b46-b7cd-4c35-976d-c8397add424a" (UID: "64489b46-b7cd-4c35-976d-c8397add424a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.981019 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.981044 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/64489b46-b7cd-4c35-976d-c8397add424a-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:25 crc kubenswrapper[4675]: I0124 07:13:25.981054 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64489b46-b7cd-4c35-976d-c8397add424a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.166687 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286002 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286046 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286101 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286173 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286237 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") pod \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\" (UID: \"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f\") "
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.286890 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs" (OuterVolumeSpecName: "logs") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.290905 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.292054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd" (OuterVolumeSpecName: "kube-api-access-g9mvd") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "kube-api-access-g9mvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.321609 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.356814 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data" (OuterVolumeSpecName: "config-data") pod "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" (UID: "81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388344 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388380 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9mvd\" (UniqueName: \"kubernetes.io/projected/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-kube-api-access-g9mvd\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388392 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388401 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.388410 4675 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.754105 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31cacad0-4d32-4300-8bdc-bbf15fcd77ac","Type":"ContainerStarted","Data":"5575dec237a9f5d9480ea08e2ecf8aeed6f01b17856974322269973a088ec027"}
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760703 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dff87ccf4-s6k69" event={"ID":"81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f","Type":"ContainerDied","Data":"ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196"}
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760763 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dff87ccf4-s6k69"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760776 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.760772 4675 scope.go:117] "RemoveContainer" containerID="2ab48c35f8779185c47e215c2aabb98ee054565141a51e8dee3bcb4b8f15cbbb"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.780034 4675 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="64489b46-b7cd-4c35-976d-c8397add424a" podUID="2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.791179 4675 scope.go:117] "RemoveContainer" containerID="e8bccace6fcb2244f7e0a94668ba27679d5c9bd93341c87183cda6405d20bba8"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.800589 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"]
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.809984 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6dff87ccf4-s6k69"]
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.952431 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64489b46-b7cd-4c35-976d-c8397add424a" path="/var/lib/kubelet/pods/64489b46-b7cd-4c35-976d-c8397add424a/volumes"
Jan 24 07:13:26 crc kubenswrapper[4675]: I0124 07:13:26.952857 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" path="/var/lib/kubelet/pods/81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f/volumes"
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.015880 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79656b6bf8-nwng8" podUID="17e03478-4656-43f8-8d7b-5dfb1ff160a1" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.540213 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.855780 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31cacad0-4d32-4300-8bdc-bbf15fcd77ac","Type":"ContainerStarted","Data":"8f81671f1a461ee9b613e2b6c9f1be3c3719200523723bddb7e92d631dfe28f8"}
Jan 24 07:13:27 crc kubenswrapper[4675]: I0124 07:13:27.878788 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.878770521 podStartE2EDuration="4.878770521s" podCreationTimestamp="2026-01-24 07:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:27.876536768 +0000 UTC m=+1209.172642001" watchObservedRunningTime="2026-01-24 07:13:27.878770521 +0000 UTC m=+1209.174875744"
Jan 24 07:13:29 crc kubenswrapper[4675]: I0124 07:13:29.355192 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.646971 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5875964765-b68mp"]
Jan 24 07:13:31 crc kubenswrapper[4675]: E0124 07:13:31.648295 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api"
Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 
07:13:31.648310 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" Jan 24 07:13:31 crc kubenswrapper[4675]: E0124 07:13:31.648366 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.648373 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.650744 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api-log" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.650773 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ac4ed7-04e2-420b-b6cd-4021c5cd1b9f" containerName="barbican-api" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.656983 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.667433 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.667762 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.668489 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-log-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708294 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j4l\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-kube-api-access-t9j4l\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708437 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-public-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708470 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-run-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.708527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-config-data\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.709865 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5875964765-b68mp"] Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.713175 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-internal-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.713339 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-combined-ca-bundle\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.713508 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-etc-swift\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 
24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-etc-swift\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815776 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-log-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815818 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9j4l\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-kube-api-access-t9j4l\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815877 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-public-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815911 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-run-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815951 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-config-data\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.815973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-internal-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.816004 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-combined-ca-bundle\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.816364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-log-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.816616 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1443f8-8586-4757-9637-378c7c88787d-run-httpd\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.824010 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-internal-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.824159 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-combined-ca-bundle\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.827103 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-config-data\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.829501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1443f8-8586-4757-9637-378c7c88787d-public-tls-certs\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.831404 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-etc-swift\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.833139 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9j4l\" (UniqueName: 
\"kubernetes.io/projected/fa1443f8-8586-4757-9637-378c7c88787d-kube-api-access-t9j4l\") pod \"swift-proxy-5875964765-b68mp\" (UID: \"fa1443f8-8586-4757-9637-378c7c88787d\") " pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:31 crc kubenswrapper[4675]: I0124 07:13:31.996870 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:34 crc kubenswrapper[4675]: I0124 07:13:34.549112 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 24 07:13:34 crc kubenswrapper[4675]: I0124 07:13:34.925673 4675 generic.go:334] "Generic (PLEG): container finished" podID="62b7e06f-b840-408c-b026-a086b975812f" containerID="8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8" exitCode=137 Jan 24 07:13:34 crc kubenswrapper[4675]: I0124 07:13:34.925839 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8"} Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.605290 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77c5f475df-4zndh" Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.680682 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"] Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.681223 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cddfd9dd-rbhzj" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api" containerID="cri-o://5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0" gracePeriod=30 Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.681296 4675 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-67cddfd9dd-rbhzj" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd" containerID="cri-o://7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8" gracePeriod=30 Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.960184 4675 generic.go:334] "Generic (PLEG): container finished" podID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerID="7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8" exitCode=0 Jan 24 07:13:35 crc kubenswrapper[4675]: I0124 07:13:35.960838 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerDied","Data":"7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8"} Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.357990 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462282 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462380 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb8hn\" (UniqueName: \"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462423 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: 
\"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462515 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462557 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462618 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.462650 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") pod \"62b7e06f-b840-408c-b026-a086b975812f\" (UID: \"62b7e06f-b840-408c-b026-a086b975812f\") " Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.464476 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.464512 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.469621 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts" (OuterVolumeSpecName: "scripts") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.470643 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn" (OuterVolumeSpecName: "kube-api-access-pb8hn") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "kube-api-access-pb8hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.503087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.546291 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data" (OuterVolumeSpecName: "config-data") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564207 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564238 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564247 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564256 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62b7e06f-b840-408c-b026-a086b975812f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564264 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.564271 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb8hn\" (UniqueName: 
\"kubernetes.io/projected/62b7e06f-b840-408c-b026-a086b975812f-kube-api-access-pb8hn\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.577835 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62b7e06f-b840-408c-b026-a086b975812f" (UID: "62b7e06f-b840-408c-b026-a086b975812f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.630241 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5875964765-b68mp"] Jan 24 07:13:38 crc kubenswrapper[4675]: I0124 07:13:38.665518 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b7e06f-b840-408c-b026-a086b975812f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.024029 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62b7e06f-b840-408c-b026-a086b975812f","Type":"ContainerDied","Data":"0ad4338e6f939f6bda642b2d5397708669ef3b6004444834c598ae8f3b747800"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.024088 4675 scope.go:117] "RemoveContainer" containerID="8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.024216 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.037070 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb","Type":"ContainerStarted","Data":"45c28282008767484a5f0789dd7a4805559890b72e9c3b147dc8c5ceaff39827"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.050643 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5875964765-b68mp" event={"ID":"fa1443f8-8586-4757-9637-378c7c88787d","Type":"ContainerStarted","Data":"df36ca04e0b2e233bc6b4351aa2079ede41ed02087949b0bc03afaf1ce76fa26"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.050687 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5875964765-b68mp" event={"ID":"fa1443f8-8586-4757-9637-378c7c88787d","Type":"ContainerStarted","Data":"4d45c7ff32645e7c68a19063b5f235827135ecd5d3374c900e986f5e650ea018"} Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.071829 4675 scope.go:117] "RemoveContainer" containerID="9de1cb80f6e48e728da957f71f2dc3c5adb5e1352f3a0c6647494ce1109b92eb" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.075972 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.081291 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.113263 4675 scope.go:117] "RemoveContainer" containerID="0ff4885d5dbc856385bb82616203fe2d9ca31f546f0610abde226a41b839fc48" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.138488 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.923873339 podStartE2EDuration="15.138465415s" podCreationTimestamp="2026-01-24 07:13:24 +0000 UTC" firstStartedPulling="2026-01-24 07:13:25.686069885 +0000 UTC 
m=+1206.982175108" lastFinishedPulling="2026-01-24 07:13:37.900661961 +0000 UTC m=+1219.196767184" observedRunningTime="2026-01-24 07:13:39.095858058 +0000 UTC m=+1220.391963281" watchObservedRunningTime="2026-01-24 07:13:39.138465415 +0000 UTC m=+1220.434570638" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.203565 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:39 crc kubenswrapper[4675]: E0124 07:13:39.204585 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.204680 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent" Jan 24 07:13:39 crc kubenswrapper[4675]: E0124 07:13:39.204878 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.204975 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" Jan 24 07:13:39 crc kubenswrapper[4675]: E0124 07:13:39.205063 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205116 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205351 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="sg-core" Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205416 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="proxy-httpd" 
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.205480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b7e06f-b840-408c-b026-a086b975812f" containerName="ceilometer-notification-agent"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.208370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.214514 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.215461 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.218579 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.349235 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350282 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350487 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350603 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350750 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.350873 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.452050 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.452842 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.452985 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453094 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453240 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453323 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.453808 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.454336 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.464905 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.465960 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.467080 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.476954 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.480834 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"ceilometer-0\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") " pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.548357 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:39 crc kubenswrapper[4675]: I0124 07:13:39.791005 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.059711 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5875964765-b68mp" event={"ID":"fa1443f8-8586-4757-9637-378c7c88787d","Type":"ContainerStarted","Data":"e27b28b41b3b1c983e1b3eb7f93898fa6d9cea3754d1d74ecbdf0f6ae277a284"}
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.060767 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5875964765-b68mp"
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.060793 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5875964765-b68mp"
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.084030 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.111964 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5875964765-b68mp" podStartSLOduration=9.111946885 podStartE2EDuration="9.111946885s" podCreationTimestamp="2026-01-24 07:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:40.101763329 +0000 UTC m=+1221.397868562" watchObservedRunningTime="2026-01-24 07:13:40.111946885 +0000 UTC m=+1221.408052108"
Jan 24 07:13:40 crc kubenswrapper[4675]: I0124 07:13:40.957932 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b7e06f-b840-408c-b026-a086b975812f" path="/var/lib/kubelet/pods/62b7e06f-b840-408c-b026-a086b975812f/volumes"
Jan 24 07:13:41 crc kubenswrapper[4675]: I0124 07:13:41.073630 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"0c61230a10f189c874d0a49db9b6e2672e6d430929f425e547950a4f18245bc4"}
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.086916 4675 generic.go:334] "Generic (PLEG): container finished" podID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerID="5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0" exitCode=0
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.087163 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerDied","Data":"5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0"}
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.094548 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"}
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.493023 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj"
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616374 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616570 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616611 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616661 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.616685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") pod \"d7b4aa87-c092-4624-bd65-c9393dd36098\" (UID: \"d7b4aa87-c092-4624-bd65-c9393dd36098\") "
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.643766 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f" (OuterVolumeSpecName: "kube-api-access-hw44f") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "kube-api-access-hw44f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.643913 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.720256 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw44f\" (UniqueName: \"kubernetes.io/projected/d7b4aa87-c092-4624-bd65-c9393dd36098-kube-api-access-hw44f\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.720305 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.745244 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.745334 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.758953 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config" (OuterVolumeSpecName: "config") pod "d7b4aa87-c092-4624-bd65-c9393dd36098" (UID: "d7b4aa87-c092-4624-bd65-c9393dd36098"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.821413 4675 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.821443 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:42 crc kubenswrapper[4675]: I0124 07:13:42.821455 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d7b4aa87-c092-4624-bd65-c9393dd36098-config\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.104189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cddfd9dd-rbhzj" event={"ID":"d7b4aa87-c092-4624-bd65-c9393dd36098","Type":"ContainerDied","Data":"9a865440f381a7417cf12468f043671da2b8b23ef036f738147c217bd9897103"}
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.104755 4675 scope.go:117] "RemoveContainer" containerID="7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.104476 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cddfd9dd-rbhzj"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.107561 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"}
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.166625 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.171208 4675 scope.go:117] "RemoveContainer" containerID="5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.182277 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67cddfd9dd-rbhzj"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.218830 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:13:43 crc kubenswrapper[4675]: E0124 07:13:43.219193 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219211 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd"
Jan 24 07:13:43 crc kubenswrapper[4675]: E0124 07:13:43.219236 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219243 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219387 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-httpd"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.219414 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" containerName="neutron-api"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.220012 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.242295 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.325789 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.327887 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.335174 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.335279 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.352165 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.417855 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.418975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.425071 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436420 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436489 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436545 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.436596 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.437505 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.452749 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.470311 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"nova-api-db-create-p847h\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.530367 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.536694 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.536761 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p847h"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.538900 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.538968 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.539026 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.539075 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.539917 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.572763 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"nova-cell0-db-create-2gcsv\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") " pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.599741 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.642883 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.642976 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.643000 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.643030 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.644140 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.658146 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.668378 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"nova-api-47cc-account-create-update-qbjjs\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") " pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.690034 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.692526 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.702132 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.727775 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.744529 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.744568 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.747402 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.777471 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"nova-cell1-db-create-4z8kz\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.818522 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.819679 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.823706 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.843862 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"]
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.850179 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.850279 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.952975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956535 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956585 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.956650 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.957588 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.964570 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz"
Jan 24 07:13:43 crc kubenswrapper[4675]: I0124 07:13:43.990582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"nova-cell0-1e3e-account-create-update-z84p9\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") " pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.038951 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.063925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.064016 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.069464 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.092801 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"nova-cell1-aab6-account-create-update-4zgt4\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") " pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.144417 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.165457 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"}
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.198565 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.309226 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.655330 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.664626 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f"
containerName="glance-httpd" containerID="cri-o://82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b" gracePeriod=30 Jan 24 07:13:44 crc kubenswrapper[4675]: I0124 07:13:44.663425 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" containerID="cri-o://5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19" gracePeriod=30 Jan 24 07:13:44 crc kubenswrapper[4675]: E0124 07:13:44.697608 4675 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ac4ed7_04e2_420b_b6cd_4021c5cd1b9f.slice/crio-ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196: Error finding container ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196: Status 404 returned error can't find the container with id ba618e3b90eff5a06546979f9c7ceac9a3397a8a6ffbfecd2ebc67cead2fc196 Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.034359 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b4aa87-c092-4624-bd65-c9393dd36098" path="/var/lib/kubelet/pods/d7b4aa87-c092-4624-bd65-c9393dd36098/volumes" Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.035236 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"] Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.058500 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"] Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.098241 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"] Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.182934 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" 
event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerStarted","Data":"ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.182979 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerStarted","Data":"15c2743490cbd27546d69a8d6fb2ef4e7870e6c547960bdf525801f45200d4c2"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.193026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4z8kz" event={"ID":"9b6ffe68-4ebd-47e8-8b11-20050394e5b7","Type":"ContainerStarted","Data":"d941027a5ad5be885d89fdff0b43597296adafeb003615ecbc4f53f02190bd78"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.197909 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" event={"ID":"db48a3bd-546d-4f52-a9bc-340e03790730","Type":"ContainerStarted","Data":"d8fd9a81db61dcccc43a820e631e91369892f49d26e3b02c9f950de6edc3f9d7"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.201388 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerStarted","Data":"dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.201439 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerStarted","Data":"bcc40f584741f9b08cd9119f4584722b1c051946412818cd64bd174bd95b9652"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.202887 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47cc-account-create-update-qbjjs" 
event={"ID":"f8458b8a-6770-4e62-9848-55a9b142cb8c","Type":"ContainerStarted","Data":"8cef497a71f5e8c9e6023c3573d0db3b33448ea3ea9a25de79ae6e97e277f95f"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.211426 4675 generic.go:334] "Generic (PLEG): container finished" podID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerID="7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1" exitCode=137 Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.211758 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerDied","Data":"7f1a6675a950b42c9ecbccbf0a4fb33df3e31b81c67e7165b82b4f582a3574f1"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.215012 4675 generic.go:334] "Generic (PLEG): container finished" podID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19" exitCode=143 Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.215057 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerDied","Data":"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"} Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.222952 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-p847h" podStartSLOduration=2.222926918 podStartE2EDuration="2.222926918s" podCreationTimestamp="2026-01-24 07:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:45.1998157 +0000 UTC m=+1226.495920923" watchObservedRunningTime="2026-01-24 07:13:45.222926918 +0000 UTC m=+1226.519032131" Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.267401 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-db-create-2gcsv" podStartSLOduration=2.26737279 podStartE2EDuration="2.26737279s" podCreationTimestamp="2026-01-24 07:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:45.244121309 +0000 UTC m=+1226.540226532" watchObservedRunningTime="2026-01-24 07:13:45.26737279 +0000 UTC m=+1226.563478013" Jan 24 07:13:45 crc kubenswrapper[4675]: I0124 07:13:45.290116 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"] Jan 24 07:13:45 crc kubenswrapper[4675]: E0124 07:13:45.303260 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice/crio-conmon-8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b4aa87_c092_4624_bd65_c9393dd36098.slice/crio-conmon-7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b4aa87_c092_4624_bd65_c9393dd36098.slice/crio-7d3edeae517b10119dce0060d70818655886c052072ae9c23aefdb65eed859a8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice/crio-0ad4338e6f939f6bda642b2d5397708669ef3b6004444834c598ae8f3b747800\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b4aa87_c092_4624_bd65_c9393dd36098.slice/crio-5ad36d71e3b73e26c65c39abbe42993d524b7ec51d9571439c232b730197cdc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b7e06f_b840_408c_b026_a086b975812f.slice/crio-8617cf90ae125e2309b0341045ddf13613f8df2ed43bfb3a2c647c1b2e5efed8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a50c14d_d518_492c_87d1_a194dc075c9f.slice/crio-conmon-5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ac4ed7_04e2_420b_b6cd_4021c5cd1b9f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64489b46_b7cd_4c35_976d_c8397add424a.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:13:45 crc kubenswrapper[4675]: W0124 07:13:45.337382 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb102798_6f2c_4cf4_b697_03cc94f9174a.slice/crio-32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4 WatchSource:0}: Error finding container 32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4: Status 404 returned error can't find the container with id 32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.224114 4675 generic.go:334] "Generic (PLEG): container finished" podID="db48a3bd-546d-4f52-a9bc-340e03790730" containerID="7ee7b6faa999fda3d6ec97508bbaca0406687b589e89517642e51b8d024a1a97" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.224291 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" event={"ID":"db48a3bd-546d-4f52-a9bc-340e03790730","Type":"ContainerDied","Data":"7ee7b6faa999fda3d6ec97508bbaca0406687b589e89517642e51b8d024a1a97"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.226228 4675 generic.go:334] "Generic (PLEG): container finished" podID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerID="dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.226326 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerDied","Data":"dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.228349 4675 generic.go:334] "Generic (PLEG): container finished" podID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerID="13922ccdb386ccdef5ac3f7ca81cf15c2217528fedc1c377893db26450c6489d" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.228405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47cc-account-create-update-qbjjs" event={"ID":"f8458b8a-6770-4e62-9848-55a9b142cb8c","Type":"ContainerDied","Data":"13922ccdb386ccdef5ac3f7ca81cf15c2217528fedc1c377893db26450c6489d"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.231038 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-656ff794dd-jx8ld" event={"ID":"4b7e7730-0a42-48b0-bb7e-da95eb915126","Type":"ContainerStarted","Data":"4364811164cf790d506ed582ab48a71d2c08ea8c39feab37977f45c719a19230"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.233680 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerStarted","Data":"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"} Jan 24 07:13:46 crc 
kubenswrapper[4675]: I0124 07:13:46.233844 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" containerID="cri-o://87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234074 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234118 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" containerID="cri-o://d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234160 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" containerID="cri-o://1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.234193 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" containerID="cri-o://e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.241177 4675 generic.go:334] "Generic (PLEG): container finished" podID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerID="ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.241608 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" 
event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerDied","Data":"ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.254647 4675 generic.go:334] "Generic (PLEG): container finished" podID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerID="e024578d84cf52e29f779949e2955f4eac1d56a123af391ad810ea1674a31648" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.254736 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4z8kz" event={"ID":"9b6ffe68-4ebd-47e8-8b11-20050394e5b7","Type":"ContainerDied","Data":"e024578d84cf52e29f779949e2955f4eac1d56a123af391ad810ea1674a31648"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.257948 4675 generic.go:334] "Generic (PLEG): container finished" podID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerID="92a6b4b87b9b2ef26a79f73c81ecbeb36fe6ccb8b0e511ab2d00e52dda5c10ce" exitCode=0 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.258053 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" event={"ID":"cb102798-6f2c-4cf4-b697-03cc94f9174a","Type":"ContainerDied","Data":"92a6b4b87b9b2ef26a79f73c81ecbeb36fe6ccb8b0e511ab2d00e52dda5c10ce"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.258129 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" event={"ID":"cb102798-6f2c-4cf4-b697-03cc94f9174a","Type":"ContainerStarted","Data":"32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4"} Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.298376 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.119006605 podStartE2EDuration="7.298362396s" podCreationTimestamp="2026-01-24 07:13:39 +0000 UTC" firstStartedPulling="2026-01-24 07:13:40.103802059 +0000 UTC m=+1221.399907282" 
lastFinishedPulling="2026-01-24 07:13:45.28315785 +0000 UTC m=+1226.579263073" observedRunningTime="2026-01-24 07:13:46.296008939 +0000 UTC m=+1227.592114162" watchObservedRunningTime="2026-01-24 07:13:46.298362396 +0000 UTC m=+1227.594467619" Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.885559 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.885950 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" containerID="cri-o://a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994" gracePeriod=30 Jan 24 07:13:46 crc kubenswrapper[4675]: I0124 07:13:46.886169 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" containerID="cri-o://92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e" gracePeriod=30 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.033373 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.038150 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5875964765-b68mp" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.271405 4675 generic.go:334] "Generic (PLEG): container finished" podID="95652bba-0800-475e-9f2f-20e64195d523" containerID="a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994" exitCode=143 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.271486 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerDied","Data":"a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.274836 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5" exitCode=0 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.274860 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba" exitCode=2 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.274868 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d" exitCode=0 Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.275696 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.275821 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.275840 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"} Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.833045 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-p847h" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.892099 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") pod \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.904579 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq" (OuterVolumeSpecName: "kube-api-access-spbfq") pod "d4d4a29e-dbe1-4145-b0af-afa0c77172b9" (UID: "d4d4a29e-dbe1-4145-b0af-afa0c77172b9"). InnerVolumeSpecName "kube-api-access-spbfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:47 crc kubenswrapper[4675]: I0124 07:13:47.998098 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") pod \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\" (UID: \"d4d4a29e-dbe1-4145-b0af-afa0c77172b9\") " Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.002795 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4d4a29e-dbe1-4145-b0af-afa0c77172b9" (UID: "d4d4a29e-dbe1-4145-b0af-afa0c77172b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.004372 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spbfq\" (UniqueName: \"kubernetes.io/projected/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-kube-api-access-spbfq\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.106061 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4d4a29e-dbe1-4145-b0af-afa0c77172b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.272288 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.325256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4z8kz" event={"ID":"9b6ffe68-4ebd-47e8-8b11-20050394e5b7","Type":"ContainerDied","Data":"d941027a5ad5be885d89fdff0b43597296adafeb003615ecbc4f53f02190bd78"} Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.325297 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d941027a5ad5be885d89fdff0b43597296adafeb003615ecbc4f53f02190bd78" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.325360 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4z8kz" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.329296 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p847h" event={"ID":"d4d4a29e-dbe1-4145-b0af-afa0c77172b9","Type":"ContainerDied","Data":"15c2743490cbd27546d69a8d6fb2ef4e7870e6c547960bdf525801f45200d4c2"} Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.329334 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15c2743490cbd27546d69a8d6fb2ef4e7870e6c547960bdf525801f45200d4c2" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.329380 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p847h" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.363587 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.376756 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.397925 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.406372 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.414901 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") pod \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.415174 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") pod \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\" (UID: \"9b6ffe68-4ebd-47e8-8b11-20050394e5b7\") " Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.417262 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b6ffe68-4ebd-47e8-8b11-20050394e5b7" (UID: "9b6ffe68-4ebd-47e8-8b11-20050394e5b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.428536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr" (OuterVolumeSpecName: "kube-api-access-hs7nr") pod "9b6ffe68-4ebd-47e8-8b11-20050394e5b7" (UID: "9b6ffe68-4ebd-47e8-8b11-20050394e5b7"). InnerVolumeSpecName "kube-api-access-hs7nr". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.518133 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") pod \"db48a3bd-546d-4f52-a9bc-340e03790730\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.518475 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") pod \"cb102798-6f2c-4cf4-b697-03cc94f9174a\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.518643 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") pod \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.519432 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") pod \"f8458b8a-6770-4e62-9848-55a9b142cb8c\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520032 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") pod \"cb102798-6f2c-4cf4-b697-03cc94f9174a\" (UID: \"cb102798-6f2c-4cf4-b697-03cc94f9174a\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520169 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") pod \"f8458b8a-6770-4e62-9848-55a9b142cb8c\" (UID: \"f8458b8a-6770-4e62-9848-55a9b142cb8c\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520298 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") pod \"db48a3bd-546d-4f52-a9bc-340e03790730\" (UID: \"db48a3bd-546d-4f52-a9bc-340e03790730\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.520670 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") pod \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\" (UID: \"c962c5e1-a244-4690-935e-9a7b0d5fc7e4\") "
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.519507 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb102798-6f2c-4cf4-b697-03cc94f9174a" (UID: "cb102798-6f2c-4cf4-b697-03cc94f9174a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.519949 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8458b8a-6770-4e62-9848-55a9b142cb8c" (UID: "f8458b8a-6770-4e62-9848-55a9b142cb8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.522509 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9" (OuterVolumeSpecName: "kube-api-access-cr8d9") pod "db48a3bd-546d-4f52-a9bc-340e03790730" (UID: "db48a3bd-546d-4f52-a9bc-340e03790730"). InnerVolumeSpecName "kube-api-access-cr8d9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.523027 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db48a3bd-546d-4f52-a9bc-340e03790730" (UID: "db48a3bd-546d-4f52-a9bc-340e03790730"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.523605 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c962c5e1-a244-4690-935e-9a7b0d5fc7e4" (UID: "c962c5e1-a244-4690-935e-9a7b0d5fc7e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.525569 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679" (OuterVolumeSpecName: "kube-api-access-bg679") pod "f8458b8a-6770-4e62-9848-55a9b142cb8c" (UID: "f8458b8a-6770-4e62-9848-55a9b142cb8c"). InnerVolumeSpecName "kube-api-access-bg679". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.525825 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw" (OuterVolumeSpecName: "kube-api-access-7lwdw") pod "cb102798-6f2c-4cf4-b697-03cc94f9174a" (UID: "cb102798-6f2c-4cf4-b697-03cc94f9174a"). InnerVolumeSpecName "kube-api-access-7lwdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.525969 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8" (OuterVolumeSpecName: "kube-api-access-7nmq8") pod "c962c5e1-a244-4690-935e-9a7b0d5fc7e4" (UID: "c962c5e1-a244-4690-935e-9a7b0d5fc7e4"). InnerVolumeSpecName "kube-api-access-7nmq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.535972 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536189 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb102798-6f2c-4cf4-b697-03cc94f9174a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536296 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nmq8\" (UniqueName: \"kubernetes.io/projected/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-kube-api-access-7nmq8\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536372 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8458b8a-6770-4e62-9848-55a9b142cb8c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536450 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lwdw\" (UniqueName: \"kubernetes.io/projected/cb102798-6f2c-4cf4-b697-03cc94f9174a-kube-api-access-7lwdw\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536537 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs7nr\" (UniqueName: \"kubernetes.io/projected/9b6ffe68-4ebd-47e8-8b11-20050394e5b7-kube-api-access-hs7nr\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536611 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg679\" (UniqueName: \"kubernetes.io/projected/f8458b8a-6770-4e62-9848-55a9b142cb8c-kube-api-access-bg679\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536688 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db48a3bd-546d-4f52-a9bc-340e03790730-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536770 4675 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c962c5e1-a244-4690-935e-9a7b0d5fc7e4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:48 crc kubenswrapper[4675]: I0124 07:13:48.536843 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr8d9\" (UniqueName: \"kubernetes.io/projected/db48a3bd-546d-4f52-a9bc-340e03790730-kube-api-access-cr8d9\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.055202 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.157998 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158047 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158176 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158246 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158302 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158335 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158372 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158400 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") pod \"5a50c14d-d518-492c-87d1-a194dc075c9f\" (UID: \"5a50c14d-d518-492c-87d1-a194dc075c9f\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.158682 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.159395 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.160926 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs" (OuterVolumeSpecName: "logs") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.213086 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts" (OuterVolumeSpecName: "scripts") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.221562 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.221824 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7" (OuterVolumeSpecName: "kube-api-access-4r2w7") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "kube-api-access-4r2w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261322 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261350 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r2w7\" (UniqueName: \"kubernetes.io/projected/5a50c14d-d518-492c-87d1-a194dc075c9f-kube-api-access-4r2w7\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261360 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.261369 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a50c14d-d518-492c-87d1-a194dc075c9f-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.280926 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data" (OuterVolumeSpecName: "config-data") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.288998 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.319574 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.341390 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.351811 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-47cc-account-create-update-qbjjs" event={"ID":"f8458b8a-6770-4e62-9848-55a9b142cb8c","Type":"ContainerDied","Data":"8cef497a71f5e8c9e6023c3573d0db3b33448ea3ea9a25de79ae6e97e277f95f"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.351873 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cef497a71f5e8c9e6023c3573d0db3b33448ea3ea9a25de79ae6e97e277f95f"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.352002 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-47cc-account-create-update-qbjjs"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369473 4675 generic.go:334] "Generic (PLEG): container finished" podID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b" exitCode=0
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369556 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerDied","Data":"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369585 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a50c14d-d518-492c-87d1-a194dc075c9f","Type":"ContainerDied","Data":"3f56668cc86ccffe01283b05d57ef7e538fda6369f55106b029c24100a089c58"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369602 4675 scope.go:117] "RemoveContainer" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.369740 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.370612 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.370659 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.370670 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377180 4675 generic.go:334] "Generic (PLEG): container finished" podID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c" exitCode=0
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377235 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"918eda7b-6eff-4fb5-90d6-1b43a18787fb","Type":"ContainerDied","Data":"0c61230a10f189c874d0a49db9b6e2672e6d430929f425e547950a4f18245bc4"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.377311 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.384509 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.384884 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aab6-account-create-update-4zgt4" event={"ID":"cb102798-6f2c-4cf4-b697-03cc94f9174a","Type":"ContainerDied","Data":"32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.384950 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e95dec4f315af321abacd7d04a4ba3d82e40b4cceddf8fa638fdfe9564b7a4"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.393008 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5a50c14d-d518-492c-87d1-a194dc075c9f" (UID: "5a50c14d-d518-492c-87d1-a194dc075c9f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.397540 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9" event={"ID":"db48a3bd-546d-4f52-a9bc-340e03790730","Type":"ContainerDied","Data":"d8fd9a81db61dcccc43a820e631e91369892f49d26e3b02c9f950de6edc3f9d7"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.397595 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8fd9a81db61dcccc43a820e631e91369892f49d26e3b02c9f950de6edc3f9d7"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.397705 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e3e-account-create-update-z84p9"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.427444 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2gcsv" event={"ID":"c962c5e1-a244-4690-935e-9a7b0d5fc7e4","Type":"ContainerDied","Data":"bcc40f584741f9b08cd9119f4584722b1c051946412818cd64bd174bd95b9652"}
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.427482 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc40f584741f9b08cd9119f4584722b1c051946412818cd64bd174bd95b9652"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.427557 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2gcsv"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.452210 4675 scope.go:117] "RemoveContainer" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.475833 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.475928 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.475971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476011 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476055 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476083 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476293 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") pod \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\" (UID: \"918eda7b-6eff-4fb5-90d6-1b43a18787fb\") "
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476557 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476835 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.476850 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a50c14d-d518-492c-87d1-a194dc075c9f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.477001 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.493960 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts" (OuterVolumeSpecName: "scripts") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.498993 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792" (OuterVolumeSpecName: "kube-api-access-47792") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "kube-api-access-47792". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.502130 4675 scope.go:117] "RemoveContainer" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.502769 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b\": container with ID starting with 82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b not found: ID does not exist" containerID="82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.502795 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b"} err="failed to get container status \"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b\": rpc error: code = NotFound desc = could not find container \"82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b\": container with ID starting with 82338f306b3369c798dc4b0d1c3b4d979578c4456ab7fc5462906cc1827c434b not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.502815 4675 scope.go:117] "RemoveContainer" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.506767 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19\": container with ID starting with 5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19 not found: ID does not exist" containerID="5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.506843 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19"} err="failed to get container status \"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19\": rpc error: code = NotFound desc = could not find container \"5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19\": container with ID starting with 5ebe37d1791890a118fdf8904c8bfc4f8f7b977d3c53901fa603a9965b30aa19 not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.506882 4675 scope.go:117] "RemoveContainer" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.537231 4675 scope.go:117] "RemoveContainer" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.541272 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.567511 4675 scope.go:117] "RemoveContainer" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.578153 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581488 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581511 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581521 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47792\" (UniqueName: \"kubernetes.io/projected/918eda7b-6eff-4fb5-90d6-1b43a18787fb-kube-api-access-47792\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581536 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.581545 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/918eda7b-6eff-4fb5-90d6-1b43a18787fb-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.593583 4675 scope.go:117] "RemoveContainer" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.611519 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data" (OuterVolumeSpecName: "config-data") pod "918eda7b-6eff-4fb5-90d6-1b43a18787fb" (UID: "918eda7b-6eff-4fb5-90d6-1b43a18787fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.636435 4675 scope.go:117] "RemoveContainer" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.636797 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5\": container with ID starting with d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5 not found: ID does not exist" containerID="d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.636830 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5"} err="failed to get container status \"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5\": rpc error: code = NotFound desc = could not find container \"d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5\": container with ID starting with d04ef86b26063bf010e918f77e50410ea11052b8079bb2ff0240aa7c18fbc6b5 not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.636857 4675 scope.go:117] "RemoveContainer" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.637501 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba\": container with ID starting with 1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba not found: ID does not exist" containerID="1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.637522 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba"} err="failed to get container status \"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba\": rpc error: code = NotFound desc = could not find container \"1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba\": container with ID starting with 1af919729b2c0b116c02f1e09b1069a40fefa53acc17014472464b59a03e19ba not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.637536 4675 scope.go:117] "RemoveContainer" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.637989 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d\": container with ID starting with e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d not found: ID does not exist" containerID="e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.638011 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d"} err="failed to get container status \"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d\": rpc error: code = NotFound desc = could not find container \"e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d\": container with ID starting with e20e101a2f2bf2e44954f19fb4a4da09815c36dc21812ff52d548035a68ce57d not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.638025 4675 scope.go:117] "RemoveContainer" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.638347 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c\": container with ID starting with 87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c not found: ID does not exist" containerID="87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.638375 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c"} err="failed to get container status \"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c\": rpc error: code = NotFound desc = could not find container \"87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c\": container with ID starting with 87c2b214fb58fc44ed4d5dfedfa698fe32bb0fd0bbf02f0a4c7ab394828b0f7c not found: ID does not exist"
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.702743 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918eda7b-6eff-4fb5-90d6-1b43a18787fb-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.737216 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.745064 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.768173 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.783303 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.783992 4675 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784039 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784057 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784063 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784080 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784087 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784099 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784121 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784127 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc 
kubenswrapper[4675]: E0124 07:13:49.784141 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784147 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784158 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784166 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784179 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784185 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784195 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784202 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784222 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784228 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc 
kubenswrapper[4675]: E0124 07:13:49.784245 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784251 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: E0124 07:13:49.784266 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784272 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784459 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-log" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784492 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784499 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784510 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-central-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784521 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784533 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="ceilometer-notification-agent" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784546 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" containerName="mariadb-account-create-update" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784559 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" containerName="glance-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784569 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="proxy-httpd" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784577 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" containerName="sg-core" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.784587 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" containerName="mariadb-database-create" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.785812 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.790739 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.791077 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.793373 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.809173 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.840244 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.842616 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.845585 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.845780 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.849661 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907170 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907226 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907245 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907316 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907352 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-logs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907379 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7898\" (UniqueName: \"kubernetes.io/projected/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-kube-api-access-w7898\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907424 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:49 crc kubenswrapper[4675]: I0124 07:13:49.907443 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009760 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009884 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.009908 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010463 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010612 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010751 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010860 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010917 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.010971 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011012 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011252 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-logs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011296 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011318 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7898\" (UniqueName: \"kubernetes.io/projected/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-kube-api-access-w7898\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.011440 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.012174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-logs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.012912 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.014260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " 
pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.019188 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.022566 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.022933 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.036178 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7898\" (UniqueName: \"kubernetes.io/projected/d0a8fdf4-03fc-4962-8792-6f129d2b00e4-kube-api-access-w7898\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.053582 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d0a8fdf4-03fc-4962-8792-6f129d2b00e4\") " pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.104319 4675 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.111556 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9292/healthcheck\": read tcp 10.217.0.2:60298->10.217.0.158:9292: read: connection reset by peer" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112013 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.158:9292/healthcheck\": read tcp 10.217.0.2:60284->10.217.0.158:9292: read: connection reset by peer" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112684 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112757 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112848 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 
07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112930 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.112960 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.113023 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.114083 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.114393 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"ceilometer-0\" (UID: 
\"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.117462 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.121386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.132800 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.144948 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.158951 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"ceilometer-0\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") " pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.170616 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.441264 4675 generic.go:334] "Generic (PLEG): container finished" podID="95652bba-0800-475e-9f2f-20e64195d523" containerID="92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e" exitCode=0 Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.441332 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerDied","Data":"92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e"} Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.884168 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 24 07:13:50 crc kubenswrapper[4675]: W0124 07:13:50.888303 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0a8fdf4_03fc_4962_8792_6f129d2b00e4.slice/crio-41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c WatchSource:0}: Error finding container 41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c: Status 404 returned error can't find the container with id 41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.931559 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.957035 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a50c14d-d518-492c-87d1-a194dc075c9f" path="/var/lib/kubelet/pods/5a50c14d-d518-492c-87d1-a194dc075c9f/volumes" Jan 24 07:13:50 crc kubenswrapper[4675]: I0124 07:13:50.958326 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918eda7b-6eff-4fb5-90d6-1b43a18787fb" path="/var/lib/kubelet/pods/918eda7b-6eff-4fb5-90d6-1b43a18787fb/volumes" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.008476 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.044577 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045706 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045808 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045854 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") 
pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.045987 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.046072 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.046114 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") pod \"95652bba-0800-475e-9f2f-20e64195d523\" (UID: \"95652bba-0800-475e-9f2f-20e64195d523\") " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.047183 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs" (OuterVolumeSpecName: "logs") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.047287 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.058054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts" (OuterVolumeSpecName: "scripts") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.058242 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht" (OuterVolumeSpecName: "kube-api-access-n2cht") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "kube-api-access-n2cht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.063461 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.113949 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.131616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data" (OuterVolumeSpecName: "config-data") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.141967 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149795 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149835 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149859 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149870 4675 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-n2cht\" (UniqueName: \"kubernetes.io/projected/95652bba-0800-475e-9f2f-20e64195d523-kube-api-access-n2cht\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149880 4675 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95652bba-0800-475e-9f2f-20e64195d523-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149891 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.149908 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.173104 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "95652bba-0800-475e-9f2f-20e64195d523" (UID: "95652bba-0800-475e-9f2f-20e64195d523"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.188063 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.252927 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95652bba-0800-475e-9f2f-20e64195d523-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.252985 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.459560 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0a8fdf4-03fc-4962-8792-6f129d2b00e4","Type":"ContainerStarted","Data":"41042bd618c45b8d1cc463ed6317524093a7dcf710eafaae00febb6defeb756c"} Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.460762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"81db948dee5fd531f128023fa7c1f1df374b7bed408e07fff63892479d70b1a7"} Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.463038 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95652bba-0800-475e-9f2f-20e64195d523","Type":"ContainerDied","Data":"e4fd29804bf1cbcfbb72dce66fcc9bef4e155c732e7dbfc8e6239baf96486755"} Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.463137 4675 scope.go:117] "RemoveContainer" containerID="92544b6eaa03a277318c5550337cdf4977e7b316dcc14eeae1de3a44d092ab8e" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.463084 4675 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.500987 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.523363 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.538701 4675 scope.go:117] "RemoveContainer" containerID="a03aacf34897e1379e2ae1229f088c65a86ab22a278aecd0a4ccc4cba6bdd994" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545166 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: E0124 07:13:51.545730 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545753 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" Jan 24 07:13:51 crc kubenswrapper[4675]: E0124 07:13:51.545778 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545789 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.545993 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-log" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.546026 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="95652bba-0800-475e-9f2f-20e64195d523" containerName="glance-httpd" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 
07:13:51.547059 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.550415 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.550651 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.573072 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.661925 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fx5j\" (UniqueName: \"kubernetes.io/projected/d61eafc8-f960-4335-8d26-2d47e8c7c039-kube-api-access-9fx5j\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.661992 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662061 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-logs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662085 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662105 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662165 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.662421 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764080 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764177 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-logs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764201 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764222 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764301 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764365 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fx5j\" (UniqueName: \"kubernetes.io/projected/d61eafc8-f960-4335-8d26-2d47e8c7c039-kube-api-access-9fx5j\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.764983 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.765077 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.765671 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d61eafc8-f960-4335-8d26-2d47e8c7c039-logs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.772048 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.773042 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.779176 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.784051 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d61eafc8-f960-4335-8d26-2d47e8c7c039-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.789771 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fx5j\" (UniqueName: \"kubernetes.io/projected/d61eafc8-f960-4335-8d26-2d47e8c7c039-kube-api-access-9fx5j\") pod 
\"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.810994 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d61eafc8-f960-4335-8d26-2d47e8c7c039\") " pod="openstack/glance-default-internal-api-0" Jan 24 07:13:51 crc kubenswrapper[4675]: I0124 07:13:51.883593 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.481234 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0a8fdf4-03fc-4962-8792-6f129d2b00e4","Type":"ContainerStarted","Data":"1440b0446c305524a6895876b8115f51034122782b72b9e9e4f478f123004c35"} Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.484501 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb"} Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.523262 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 24 07:13:52 crc kubenswrapper[4675]: I0124 07:13:52.963790 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95652bba-0800-475e-9f2f-20e64195d523" path="/var/lib/kubelet/pods/95652bba-0800-475e-9f2f-20e64195d523/volumes" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.498130 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d61eafc8-f960-4335-8d26-2d47e8c7c039","Type":"ContainerStarted","Data":"08f9eeaa23d0e7e1926b3bc91165e7fcb8440479b8a50e9af109c3b575c2b3c7"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.498514 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d61eafc8-f960-4335-8d26-2d47e8c7c039","Type":"ContainerStarted","Data":"b542c766f3c42910335f6149d7bde13e61206d90785b0b2a1dd900cf727a5605"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.511073 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d0a8fdf4-03fc-4962-8792-6f129d2b00e4","Type":"ContainerStarted","Data":"da325debef05abf0da1ee626fa35253467ba9e4e7d733e9ee2a3883834531783"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.521394 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da"} Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.548374 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.548360967 podStartE2EDuration="4.548360967s" podCreationTimestamp="2026-01-24 07:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:53.535593578 +0000 UTC m=+1234.831698801" watchObservedRunningTime="2026-01-24 07:13:53.548360967 +0000 UTC m=+1234.844466190" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.920827 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"] Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.921984 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.928566 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.928579 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-42gcl" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.928887 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 24 07:13:53 crc kubenswrapper[4675]: I0124 07:13:53.968252 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"] Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039408 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039484 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " 
pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.039665 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.142470 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.142964 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.143108 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.143332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.165842 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.166895 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.172066 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.177255 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvg8g\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.239862 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.312346 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.312679 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.547256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d61eafc8-f960-4335-8d26-2d47e8c7c039","Type":"ContainerStarted","Data":"1e118acaa4b68aa576d4e0cb5719e0c7aa2309b01c312be3a57455a035969ad2"}
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.556575 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c"}
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.577521 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.577498221 podStartE2EDuration="3.577498221s" podCreationTimestamp="2026-01-24 07:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:13:54.563597064 +0000 UTC m=+1235.859702287" watchObservedRunningTime="2026-01-24 07:13:54.577498221 +0000 UTC m=+1235.873603444"
Jan 24 07:13:54 crc kubenswrapper[4675]: I0124 07:13:54.768231 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"]
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.598508 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerStarted","Data":"63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32"}
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599028 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent" containerID="cri-o://fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb" gracePeriod=30
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599370 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599678 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd" containerID="cri-o://63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32" gracePeriod=30
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599809 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core" containerID="cri-o://533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c" gracePeriod=30
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.599848 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent" containerID="cri-o://f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da" gracePeriod=30
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.609162 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerStarted","Data":"1f30006b6a3b95bebf5a27838f6aeb1a6be45660ab015b24f0aa23631999e023"}
Jan 24 07:13:55 crc kubenswrapper[4675]: I0124 07:13:55.623974 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.458562622 podStartE2EDuration="6.623956294s" podCreationTimestamp="2026-01-24 07:13:49 +0000 UTC" firstStartedPulling="2026-01-24 07:13:50.980107496 +0000 UTC m=+1232.276212719" lastFinishedPulling="2026-01-24 07:13:55.145501168 +0000 UTC m=+1236.441606391" observedRunningTime="2026-01-24 07:13:55.621656629 +0000 UTC m=+1236.917761852" watchObservedRunningTime="2026-01-24 07:13:55.623956294 +0000 UTC m=+1236.920061527"
Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620621 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32" exitCode=0
Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620939 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c" exitCode=2
Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620947 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da" exitCode=0
Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620828 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32"}
Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620976 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c"}
Jan 24 07:13:56 crc kubenswrapper[4675]: I0124 07:13:56.620986 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da"}
Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.108363 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.109095 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.152056 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.165675 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.664285 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:00 crc kubenswrapper[4675]: I0124 07:14:00.664475 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.902591 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.903088 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.942286 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:01 crc kubenswrapper[4675]: I0124 07:14:01.979951 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.679689 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.679733 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.681002 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:02 crc kubenswrapper[4675]: I0124 07:14:02.681060 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.229700 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.599424 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.718452 4675 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerID="fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb" exitCode=0
Jan 24 07:14:03 crc kubenswrapper[4675]: I0124 07:14:03.722406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb"}
Jan 24 07:14:04 crc kubenswrapper[4675]: I0124 07:14:04.312655 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Jan 24 07:14:05 crc kubenswrapper[4675]: I0124 07:14:05.752008 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:05 crc kubenswrapper[4675]: I0124 07:14:05.752291 4675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 07:14:06 crc kubenswrapper[4675]: I0124 07:14:06.341014 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.133628 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216670 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") "
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216731 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") "
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216827 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") "
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216906 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") "
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217525 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.216935 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") "
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217680 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") "
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217682 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.217746 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") pod \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\" (UID: \"5c6dd852-74ce-4b09-b7db-9ea8618ecab8\") "
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.218358 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.218371 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.233512 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts" (OuterVolumeSpecName: "scripts") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.244015 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8" (OuterVolumeSpecName: "kube-api-access-zpsh8") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "kube-api-access-zpsh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.270240 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.324159 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.324191 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpsh8\" (UniqueName: \"kubernetes.io/projected/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-kube-api-access-zpsh8\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.324201 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.355655 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.414483 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data" (OuterVolumeSpecName: "config-data") pod "5c6dd852-74ce-4b09-b7db-9ea8618ecab8" (UID: "5c6dd852-74ce-4b09-b7db-9ea8618ecab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.425522 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.425570 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd852-74ce-4b09-b7db-9ea8618ecab8-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.766844 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerStarted","Data":"692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1"}
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.771231 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6dd852-74ce-4b09-b7db-9ea8618ecab8","Type":"ContainerDied","Data":"81db948dee5fd531f128023fa7c1f1df374b7bed408e07fff63892479d70b1a7"}
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.771283 4675 scope.go:117] "RemoveContainer" containerID="63a5ceae15c00468b72f0e7ce7ce829a62036f711da9d244e81a84d9b9ccde32"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.771445 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.786466 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" podStartSLOduration=2.591023916 podStartE2EDuration="14.786443693s" podCreationTimestamp="2026-01-24 07:13:53 +0000 UTC" firstStartedPulling="2026-01-24 07:13:54.754594584 +0000 UTC m=+1236.050699807" lastFinishedPulling="2026-01-24 07:14:06.950014361 +0000 UTC m=+1248.246119584" observedRunningTime="2026-01-24 07:14:07.784986418 +0000 UTC m=+1249.081091641" watchObservedRunningTime="2026-01-24 07:14:07.786443693 +0000 UTC m=+1249.082548916"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.803058 4675 scope.go:117] "RemoveContainer" containerID="533cfcafa7099ae8ad07bfe72a3c3fa2a275562290c2a449a880e413dbbff37c"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.820335 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.847453 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.848608 4675 scope.go:117] "RemoveContainer" containerID="f76537d2c23f9f75a2b333632e88cd2ff39df94b4f83529627b385a01b6c09da"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.867449 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.867975 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.867997 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent"
Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.868035 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868043 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core"
Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.868062 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868070 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent"
Jan 24 07:14:07 crc kubenswrapper[4675]: E0124 07:14:07.868096 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868105 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868312 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-notification-agent"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868339 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="sg-core"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868359 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="ceilometer-central-agent"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.868374 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" containerName="proxy-httpd"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.870463 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.876696 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.885183 4675 scope.go:117] "RemoveContainer" containerID="fcf737f104b20f78a351793d0ea94c36f05fb857e02ce7c2d27ccc8c4ddfb7bb"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.885917 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.909342 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.938899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.938962 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939076 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939111 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939147 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939187 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:07 crc kubenswrapper[4675]: I0124 07:14:07.939213 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.040817 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041795 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041851 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041904 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041948 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041992 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.042018 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.041737 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.043960 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.048956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.049336 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.050285 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.068194 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.093609 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"ceilometer-0\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.202589 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.748624 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.783148 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"9de865949fb8da851d910199e62bdf7d8f635e9c18af4381c27697a109e5dc7e"}
Jan 24 07:14:08 crc kubenswrapper[4675]: I0124 07:14:08.952926 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6dd852-74ce-4b09-b7db-9ea8618ecab8" path="/var/lib/kubelet/pods/5c6dd852-74ce-4b09-b7db-9ea8618ecab8/volumes"
Jan 24 07:14:09 crc kubenswrapper[4675]: I0124 07:14:09.796407 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895"}
Jan 24 07:14:10 crc kubenswrapper[4675]: I0124 07:14:10.819100 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9"}
Jan 24 07:14:11 crc kubenswrapper[4675]: I0124 07:14:11.500633 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:14:11 crc kubenswrapper[4675]: I0124 07:14:11.829147 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2"}
Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.839902 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerStarted","Data":"6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37"}
Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840039 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" containerID="cri-o://a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895" gracePeriod=30
Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840302 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" containerID="cri-o://19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9" gracePeriod=30
Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840332 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" containerID="cri-o://6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37" gracePeriod=30
Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840379 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.840303 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" containerID="cri-o://3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2" gracePeriod=30
Jan 24 07:14:12 crc kubenswrapper[4675]: I0124 07:14:12.860581 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2228184779999998 podStartE2EDuration="5.860564118s" podCreationTimestamp="2026-01-24 07:14:07 +0000 UTC" firstStartedPulling="2026-01-24 07:14:08.759960439 +0000 UTC m=+1250.056065662" lastFinishedPulling="2026-01-24 07:14:12.397706079 +0000 UTC m=+1253.693811302" observedRunningTime="2026-01-24 07:14:12.859923533 +0000 UTC m=+1254.156028756" watchObservedRunningTime="2026-01-24 07:14:12.860564118 +0000 UTC m=+1254.156669341"
Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.850033 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37" exitCode=0
Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.850327 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2" exitCode=2
Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.850338 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9" exitCode=0
Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.851048 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37"}
Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.851093 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2"} Jan 24 07:14:13 crc kubenswrapper[4675]: I0124 07:14:13.851103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9"} Jan 24 07:14:14 crc kubenswrapper[4675]: I0124 07:14:14.312977 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-656ff794dd-jx8ld" podUID="4b7e7730-0a42-48b0-bb7e-da95eb915126" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 24 07:14:15 crc kubenswrapper[4675]: I0124 07:14:15.877433 4675 generic.go:334] "Generic (PLEG): container finished" podID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerID="a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895" exitCode=0 Jan 24 07:14:15 crc kubenswrapper[4675]: I0124 07:14:15.877525 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895"} Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.023266 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.088208 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.088635 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089004 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089490 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089631 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089778 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.089961 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.090115 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") pod \"7e393df8-0787-4f26-a453-f7c9f27e91fc\" (UID: \"7e393df8-0787-4f26-a453-f7c9f27e91fc\") " Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.090319 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.091270 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.091398 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e393df8-0787-4f26-a453-f7c9f27e91fc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.096027 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6" (OuterVolumeSpecName: "kube-api-access-5j2z6") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "kube-api-access-5j2z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.104391 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts" (OuterVolumeSpecName: "scripts") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.118711 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.173857 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193557 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193588 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j2z6\" (UniqueName: \"kubernetes.io/projected/7e393df8-0787-4f26-a453-f7c9f27e91fc-kube-api-access-5j2z6\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193600 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.193612 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.202322 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data" (OuterVolumeSpecName: "config-data") pod "7e393df8-0787-4f26-a453-f7c9f27e91fc" (UID: "7e393df8-0787-4f26-a453-f7c9f27e91fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.294996 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e393df8-0787-4f26-a453-f7c9f27e91fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.892443 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e393df8-0787-4f26-a453-f7c9f27e91fc","Type":"ContainerDied","Data":"9de865949fb8da851d910199e62bdf7d8f635e9c18af4381c27697a109e5dc7e"} Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.893646 4675 scope.go:117] "RemoveContainer" containerID="6bf514ff8477cbc84448330c0f6bbe93f42113742f43f06d15a8b9195d342f37" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.892526 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.925320 4675 scope.go:117] "RemoveContainer" containerID="3db305910bfb883f2b1bb85ed7214811680a2995f50a0005be53ef0ba5f043b2" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.963537 4675 scope.go:117] "RemoveContainer" containerID="19812846ac3208cccefd37580a4509d8bdce481219c264aed5b499dbce0110e9" Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.980225 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:16 crc kubenswrapper[4675]: I0124 07:14:16.996684 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.009838 4675 scope.go:117] "RemoveContainer" containerID="a92be86923b70a195053cf52494c1a3c5825dddfdcafc17c2cb726559a7f3895" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.033924 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 
07:14:17.034310 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034323 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 07:14:17.034341 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034347 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 07:14:17.034368 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034374 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" Jan 24 07:14:17 crc kubenswrapper[4675]: E0124 07:14:17.034384 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034390 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034569 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="proxy-httpd" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034580 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="sg-core" Jan 24 07:14:17 crc kubenswrapper[4675]: 
I0124 07:14:17.034591 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-notification-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.034599 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" containerName="ceilometer-central-agent" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.036539 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.039967 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.040058 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.044132 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213780 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213839 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.213910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.214052 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.214155 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.214405 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316002 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " 
pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316439 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.316970 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.317206 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.317348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.317504 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.318230 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.318389 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.323699 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.324598 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.331861 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.336283 4675 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.337261 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"ceilometer-0\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.360768 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.830207 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:17 crc kubenswrapper[4675]: I0124 07:14:17.901276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"588244b07f5a60e0cabab824de73b0c1ab641046dedb0b1f0652661018ee56f9"} Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.912867 4675 generic.go:334] "Generic (PLEG): container finished" podID="827f33c6-ea9f-4312-9533-e952a218f464" containerID="692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1" exitCode=0 Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.912920 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerDied","Data":"692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1"} Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.917548 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"} Jan 24 07:14:18 crc kubenswrapper[4675]: I0124 07:14:18.961594 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e393df8-0787-4f26-a453-f7c9f27e91fc" path="/var/lib/kubelet/pods/7e393df8-0787-4f26-a453-f7c9f27e91fc/volumes" Jan 24 07:14:19 crc kubenswrapper[4675]: I0124 07:14:19.928877 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"} Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.322418 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.477028 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.477092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.477134 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") " Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 
07:14:20.477177 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") pod \"827f33c6-ea9f-4312-9533-e952a218f464\" (UID: \"827f33c6-ea9f-4312-9533-e952a218f464\") "
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.500049 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s" (OuterVolumeSpecName: "kube-api-access-5rn4s") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). InnerVolumeSpecName "kube-api-access-5rn4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.501422 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts" (OuterVolumeSpecName: "scripts") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.506864 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data" (OuterVolumeSpecName: "config-data") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.514077 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "827f33c6-ea9f-4312-9533-e952a218f464" (UID: "827f33c6-ea9f-4312-9533-e952a218f464"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580903 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580956 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580967 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/827f33c6-ea9f-4312-9533-e952a218f464-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.580976 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rn4s\" (UniqueName: \"kubernetes.io/projected/827f33c6-ea9f-4312-9533-e952a218f464-kube-api-access-5rn4s\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.941200 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvg8g" event={"ID":"827f33c6-ea9f-4312-9533-e952a218f464","Type":"ContainerDied","Data":"1f30006b6a3b95bebf5a27838f6aeb1a6be45660ab015b24f0aa23631999e023"}
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.941247 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f30006b6a3b95bebf5a27838f6aeb1a6be45660ab015b24f0aa23631999e023"
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.946793 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvg8g"
Jan 24 07:14:20 crc kubenswrapper[4675]: I0124 07:14:20.959905 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"}
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.042241 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 24 07:14:21 crc kubenswrapper[4675]: E0124 07:14:21.042701 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827f33c6-ea9f-4312-9533-e952a218f464" containerName="nova-cell0-conductor-db-sync"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.042740 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="827f33c6-ea9f-4312-9533-e952a218f464" containerName="nova-cell0-conductor-db-sync"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.042969 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="827f33c6-ea9f-4312-9533-e952a218f464" containerName="nova-cell0-conductor-db-sync"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.043667 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.046645 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-42gcl"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.047011 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.061287 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.090360 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.090468 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.090536 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdsqs\" (UniqueName: \"kubernetes.io/projected/a3a43606-cba1-4fca-93c4-a1937ee449cc-kube-api-access-qdsqs\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.192460 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.192556 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdsqs\" (UniqueName: \"kubernetes.io/projected/a3a43606-cba1-4fca-93c4-a1937ee449cc-kube-api-access-qdsqs\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.192634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.198050 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.206509 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a43606-cba1-4fca-93c4-a1937ee449cc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.211427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdsqs\" (UniqueName: \"kubernetes.io/projected/a3a43606-cba1-4fca-93c4-a1937ee449cc-kube-api-access-qdsqs\") pod \"nova-cell0-conductor-0\" (UID: \"a3a43606-cba1-4fca-93c4-a1937ee449cc\") " pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.361670 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.893390 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.972630 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerStarted","Data":"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"}
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.973634 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 24 07:14:21 crc kubenswrapper[4675]: I0124 07:14:21.978000 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a3a43606-cba1-4fca-93c4-a1937ee449cc","Type":"ContainerStarted","Data":"db1d3242c060eae2d124e04f7f8abe6aeb495bd394d21a072602d563d4cdf1d7"}
Jan 24 07:14:22 crc kubenswrapper[4675]: I0124 07:14:22.006290 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.432710802 podStartE2EDuration="6.006268937s" podCreationTimestamp="2026-01-24 07:14:16 +0000 UTC" firstStartedPulling="2026-01-24 07:14:17.822916903 +0000 UTC m=+1259.119022126" lastFinishedPulling="2026-01-24 07:14:21.396475038 +0000 UTC m=+1262.692580261" observedRunningTime="2026-01-24 07:14:21.989788798 +0000 UTC m=+1263.285894021" watchObservedRunningTime="2026-01-24 07:14:22.006268937 +0000 UTC m=+1263.302374160"
Jan 24 07:14:22 crc kubenswrapper[4675]: I0124 07:14:22.999700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a3a43606-cba1-4fca-93c4-a1937ee449cc","Type":"ContainerStarted","Data":"60cd9e64b26927d2508b43e3b4c824146763a8d648e5dc2796029d66fc1099fe"}
Jan 24 07:14:23 crc kubenswrapper[4675]: I0124 07:14:23.021058 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.021041273 podStartE2EDuration="2.021041273s" podCreationTimestamp="2026-01-24 07:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:23.013384997 +0000 UTC m=+1264.309490220" watchObservedRunningTime="2026-01-24 07:14:23.021041273 +0000 UTC m=+1264.317146496"
Jan 24 07:14:24 crc kubenswrapper[4675]: I0124 07:14:24.007741 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:26 crc kubenswrapper[4675]: I0124 07:14:26.191685 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.779768 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-656ff794dd-jx8ld"
Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.838150 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"]
Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.838692 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log" containerID="cri-o://1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50" gracePeriod=30
Jan 24 07:14:27 crc kubenswrapper[4675]: I0124 07:14:27.838828 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" containerID="cri-o://32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3" gracePeriod=30
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.077774 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerDied","Data":"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"}
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.077708 4675 generic.go:334] "Generic (PLEG): container finished" podID="6462a086-070a-4998-8a59-cb4ccbf19867" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3" exitCode=0
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.396464 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.969575 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"]
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.971508 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.973334 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.981288 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 24 07:14:31 crc kubenswrapper[4675]: I0124 07:14:31.986444 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.019982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.020109 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.020184 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.020261 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.126925 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.127270 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.127321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.127370 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.136796 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.144526 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.146847 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.166469 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"nova-cell0-cell-mapping-5bzdh\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.209642 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.210923 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.221605 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.243966 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.245681 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.261536 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.271522 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.291505 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.316336 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.336432 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.336532 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.336596 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.361206 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.362477 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.374184 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.384238 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439394 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439446 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439470 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439494 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439509 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439535 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439557 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439600 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439625 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.439671 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.452361 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.453554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.478301 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.479937 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.488687 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"nova-scheduler-0\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.506452 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.507728 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.541954 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542047 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542145 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542209 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542231 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542262 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.542790 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.547051 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.555769 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.561359 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.562497 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.570578 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.599488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"nova-cell1-novncproxy-0\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.619746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"nova-api-0\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") " pod="openstack/nova-api-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.646186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.646240 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.646357 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.646386 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.667031 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.668548 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.684550 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"]
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.689708 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752383 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752424 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752462 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752484 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752696 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752739 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752762 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752780 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.752800 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.753688 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.766320 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.768773 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.785512 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"nova-metadata-0\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod 
\"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854343 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854370 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854390 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.854411 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: 
\"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.855507 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.856596 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.856898 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.856897 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.857406 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc 
kubenswrapper[4675]: I0124 07:14:32.871164 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.881440 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"dnsmasq-dns-bccf8f775-9mbx2\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:32 crc kubenswrapper[4675]: I0124 07:14:32.887738 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.006679 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.276888 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"] Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.450100 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.647553 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:14:33 crc kubenswrapper[4675]: W0124 07:14:33.664952 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda382715e_bef1_47d2_872f_21ffbda9df32.slice/crio-738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc WatchSource:0}: Error finding container 738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc: Status 404 returned error can't find the container with id 738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.746586 4675 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:33 crc kubenswrapper[4675]: W0124 07:14:33.758327 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea2a5b84_4fe4_4a8c_8009_2d63a5faec3d.slice/crio-ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff WatchSource:0}: Error finding container ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff: Status 404 returned error can't find the container with id ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.855278 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:14:33 crc kubenswrapper[4675]: W0124 07:14:33.859324 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9bf7666_9ba5_43db_a358_1a2df0e0b118.slice/crio-0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb WatchSource:0}: Error finding container 0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb: Status 404 returned error can't find the container with id 0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.985181 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.986680 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.989438 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 24 07:14:33 crc kubenswrapper[4675]: I0124 07:14:33.990332 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 24 07:14:34 crc kubenswrapper[4675]: W0124 07:14:34.017618 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62846c05_d38a_49de_8303_468e98254357.slice/crio-95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a WatchSource:0}: Error finding container 95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a: Status 404 returned error can't find the container with id 95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.019172 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.056845 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.110827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.110956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") 
pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.111029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.111127 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.130632 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerStarted","Data":"95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.132999 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerStarted","Data":"ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.136529 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerStarted","Data":"738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.139682 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerStarted","Data":"2cdb85f9986f03860143bd1424df78b4a24959bedd905d66093742accf17b4a5"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.142757 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerStarted","Data":"7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.142794 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerStarted","Data":"0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.148486 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.152601 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerStarted","Data":"78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.152651 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerStarted","Data":"8ba554aa39535408d6839b491d0293ee2d7ef9fe4bc35c53280c69ac8fd7419a"} Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.189212 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-5bzdh" podStartSLOduration=3.189188613 podStartE2EDuration="3.189188613s" podCreationTimestamp="2026-01-24 07:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:34.186195721 +0000 UTC m=+1275.482300954" watchObservedRunningTime="2026-01-24 07:14:34.189188613 +0000 UTC m=+1275.485293836" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214219 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214289 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214331 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.214367 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " 
pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.220257 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.223632 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.226193 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.237479 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") pod \"nova-cell1-conductor-db-sync-t2bwz\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.325340 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:34 crc kubenswrapper[4675]: I0124 07:14:34.984560 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"] Jan 24 07:14:35 crc kubenswrapper[4675]: W0124 07:14:35.001326 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1819bfe_22cc_4ead_8e81_717ee70b2e83.slice/crio-d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056 WatchSource:0}: Error finding container d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056: Status 404 returned error can't find the container with id d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056 Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.166829 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerStarted","Data":"d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056"} Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178450 4675 generic.go:334] "Generic (PLEG): container finished" podID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerID="7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566" exitCode=0 Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178833 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerDied","Data":"7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566"} Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178875 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerStarted","Data":"c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7"} Jan 24 
07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.178983 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:35 crc kubenswrapper[4675]: I0124 07:14:35.199017 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" podStartSLOduration=3.198999368 podStartE2EDuration="3.198999368s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:35.194505719 +0000 UTC m=+1276.490610942" watchObservedRunningTime="2026-01-24 07:14:35.198999368 +0000 UTC m=+1276.495104591" Jan 24 07:14:36 crc kubenswrapper[4675]: I0124 07:14:36.001247 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:36 crc kubenswrapper[4675]: I0124 07:14:36.030074 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:14:36 crc kubenswrapper[4675]: I0124 07:14:36.199480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerStarted","Data":"28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af"} Jan 24 07:14:38 crc kubenswrapper[4675]: I0124 07:14:38.985024 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" podStartSLOduration=5.985007813 podStartE2EDuration="5.985007813s" podCreationTimestamp="2026-01-24 07:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:36.222429034 +0000 UTC m=+1277.518534257" watchObservedRunningTime="2026-01-24 07:14:38.985007813 +0000 UTC m=+1280.281113036" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 
07:14:39.241374 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerStarted","Data":"80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.241430 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerStarted","Data":"224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244161 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerStarted","Data":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244216 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerStarted","Data":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244409 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" containerID="cri-o://54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" gracePeriod=30 Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.244574 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-metadata" containerID="cri-o://ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" gracePeriod=30 Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.266180 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerStarted","Data":"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.266575 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" gracePeriod=30 Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.270992 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerStarted","Data":"297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d"} Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.304080 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.244932922 podStartE2EDuration="7.304057315s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:34.026256334 +0000 UTC m=+1275.322361557" lastFinishedPulling="2026-01-24 07:14:38.085380727 +0000 UTC m=+1279.381485950" observedRunningTime="2026-01-24 07:14:39.266141876 +0000 UTC m=+1280.562247099" watchObservedRunningTime="2026-01-24 07:14:39.304057315 +0000 UTC m=+1280.600162528" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.340847 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.040276023 podStartE2EDuration="7.340824707s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:33.766835057 +0000 UTC m=+1275.062940280" lastFinishedPulling="2026-01-24 07:14:38.067383731 +0000 UTC m=+1279.363488964" observedRunningTime="2026-01-24 07:14:39.287054713 +0000 UTC 
m=+1280.583159936" watchObservedRunningTime="2026-01-24 07:14:39.340824707 +0000 UTC m=+1280.636929930" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.342431 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.94163684 podStartE2EDuration="7.342420125s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:33.666576626 +0000 UTC m=+1274.962681849" lastFinishedPulling="2026-01-24 07:14:38.067359911 +0000 UTC m=+1279.363465134" observedRunningTime="2026-01-24 07:14:39.311184907 +0000 UTC m=+1280.607290130" watchObservedRunningTime="2026-01-24 07:14:39.342420125 +0000 UTC m=+1280.638525348" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.367009 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.838228634 podStartE2EDuration="7.36698995s" podCreationTimestamp="2026-01-24 07:14:32 +0000 UTC" firstStartedPulling="2026-01-24 07:14:33.537333074 +0000 UTC m=+1274.833438297" lastFinishedPulling="2026-01-24 07:14:38.06609439 +0000 UTC m=+1279.362199613" observedRunningTime="2026-01-24 07:14:39.324325207 +0000 UTC m=+1280.620430440" watchObservedRunningTime="2026-01-24 07:14:39.36698995 +0000 UTC m=+1280.663095173" Jan 24 07:14:39 crc kubenswrapper[4675]: I0124 07:14:39.896080 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043794 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043873 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043931 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.043991 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") pod \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\" (UID: \"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d\") " Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.046066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs" (OuterVolumeSpecName: "logs") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.049191 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc" (OuterVolumeSpecName: "kube-api-access-z9wtc") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "kube-api-access-z9wtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.075034 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.081450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data" (OuterVolumeSpecName: "config-data") pod "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" (UID: "ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146686 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146739 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9wtc\" (UniqueName: \"kubernetes.io/projected/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-kube-api-access-z9wtc\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146751 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.146760 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.280286 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" exitCode=0 Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.280327 4675 generic.go:334] "Generic (PLEG): container finished" podID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" exitCode=143 Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.281109 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.285931 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerDied","Data":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.286222 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerDied","Data":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.286267 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d","Type":"ContainerDied","Data":"ecf037d65b10317cfeddb23c76ce24134159fa39c202d51456c959edfdbe96ff"} Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.286289 4675 scope.go:117] "RemoveContainer" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.321929 4675 scope.go:117] "RemoveContainer" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.331570 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.346687 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.353143 4675 scope.go:117] "RemoveContainer" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.353699 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": container with ID starting with ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7 not found: ID does not exist" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.353756 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} err="failed to get container status \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": rpc error: code = NotFound desc = could not find container \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": container with ID starting with ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.353780 4675 scope.go:117] "RemoveContainer" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.356744 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": container with ID starting with 54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877 not found: ID does not exist" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.356781 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} err="failed to get container status \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": rpc error: code = NotFound desc = could not find container \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": container with ID 
starting with 54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.356809 4675 scope.go:117] "RemoveContainer" containerID="ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.358414 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7"} err="failed to get container status \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": rpc error: code = NotFound desc = could not find container \"ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7\": container with ID starting with ebe8ac40becc46721eb3dc9be0b33512562354c015836d6ad65735568d463bb7 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.358458 4675 scope.go:117] "RemoveContainer" containerID="54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.358850 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877"} err="failed to get container status \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": rpc error: code = NotFound desc = could not find container \"54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877\": container with ID starting with 54d6bdef81cd542e089db24fb729f47a7f0124f0a6cb925f3293a8aca6b2e877 not found: ID does not exist" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.373645 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.374212 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" 
containerName="nova-metadata-metadata" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374231 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-metadata" Jan 24 07:14:40 crc kubenswrapper[4675]: E0124 07:14:40.374269 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374276 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374501 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-log" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.374516 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" containerName="nova-metadata-metadata" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.375787 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.397365 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.397491 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.411470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.553751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.553795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.554029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.554123 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.554225 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656284 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656403 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656466 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.656491 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.657217 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.661678 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.662951 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.671357 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.691260 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod 
\"nova-metadata-0\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.717790 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:40 crc kubenswrapper[4675]: I0124 07:14:40.958124 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d" path="/var/lib/kubelet/pods/ea2a5b84-4fe4-4a8c-8009-2d63a5faec3d/volumes" Jan 24 07:14:41 crc kubenswrapper[4675]: I0124 07:14:41.234342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:41 crc kubenswrapper[4675]: W0124 07:14:41.239305 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod422994aa_4835_4b8e_bc15_ea6e636ffa7f.slice/crio-24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b WatchSource:0}: Error finding container 24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b: Status 404 returned error can't find the container with id 24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b Jan 24 07:14:41 crc kubenswrapper[4675]: I0124 07:14:41.292585 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerStarted","Data":"24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b"} Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.304120 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerStarted","Data":"9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102"} Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.304522 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerStarted","Data":"604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9"} Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.332929 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.332908167 podStartE2EDuration="2.332908167s" podCreationTimestamp="2026-01-24 07:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:42.322856313 +0000 UTC m=+1283.618961556" watchObservedRunningTime="2026-01-24 07:14:42.332908167 +0000 UTC m=+1283.629013390" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.558693 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.558775 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.585546 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.690316 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.888206 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:14:42 crc kubenswrapper[4675]: I0124 07:14:42.888283 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.008952 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.097373 4675 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.097609 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" containerID="cri-o://80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5" gracePeriod=10 Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.313385 4675 generic.go:334] "Generic (PLEG): container finished" podID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerID="80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5" exitCode=0 Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.313428 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerDied","Data":"80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5"} Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.315993 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerID="78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24" exitCode=0 Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.316065 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerDied","Data":"78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24"} Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.367107 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.729160 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.814598 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.814685 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815542 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815821 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815845 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.815947 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") pod \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\" (UID: \"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b\") " Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.828059 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj" (OuterVolumeSpecName: "kube-api-access-hsndj") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "kube-api-access-hsndj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.888891 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.902606 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.905505 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.919309 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.919351 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsndj\" (UniqueName: \"kubernetes.io/projected/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-kube-api-access-hsndj\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.919363 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.922104 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config" (OuterVolumeSpecName: "config") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.931925 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.944068 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:43 crc kubenswrapper[4675]: I0124 07:14:43.957607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" (UID: "b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.020594 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.020628 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.020638 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.148820 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.326597 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" event={"ID":"b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b","Type":"ContainerDied","Data":"40d5e03d905e545c8ea9211fa30ab877f25abe89a569e93e4fe8d108c5f0d55a"} Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.326639 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g9hmc" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.326685 4675 scope.go:117] "RemoveContainer" containerID="80eec3e9dcbbc1cd44130b29a91f156fcae83d34ac57cf18dfb9d0209ee3b6b5" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.329022 4675 generic.go:334] "Generic (PLEG): container finished" podID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" containerID="28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af" exitCode=0 Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.329189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerDied","Data":"28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af"} Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.389150 4675 scope.go:117] "RemoveContainer" containerID="e5be4111b174b0893302ec72491db10290b0324936ef7df715e08fe66a0569cc" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.389165 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.399908 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g9hmc"] Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.743607 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837301 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837452 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.837491 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") pod \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\" (UID: \"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284\") " Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.843425 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts" (OuterVolumeSpecName: "scripts") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.844650 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp" (OuterVolumeSpecName: "kube-api-access-gpbnp") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "kube-api-access-gpbnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.867456 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.875344 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data" (OuterVolumeSpecName: "config-data") pod "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" (UID: "5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.939852 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.939890 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpbnp\" (UniqueName: \"kubernetes.io/projected/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-kube-api-access-gpbnp\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.940104 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.940117 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:44 crc kubenswrapper[4675]: I0124 07:14:44.959245 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" path="/var/lib/kubelet/pods/b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b/volumes" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.344126 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5bzdh" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.344249 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5bzdh" event={"ID":"5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284","Type":"ContainerDied","Data":"8ba554aa39535408d6839b491d0293ee2d7ef9fe4bc35c53280c69ac8fd7419a"} Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.344289 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ba554aa39535408d6839b491d0293ee2d7ef9fe4bc35c53280c69ac8fd7419a" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.560372 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.560872 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log" containerID="cri-o://224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.560955 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api" containerID="cri-o://80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.581679 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.581864 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler" containerID="cri-o://297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 
07:14:45.607060 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.607273 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log" containerID="cri-o://604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.607753 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata" containerID="cri-o://9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102" gracePeriod=30 Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.721839 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.721903 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.892502 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.961777 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.961905 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.961933 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.962021 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") pod \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\" (UID: \"a1819bfe-22cc-4ead-8e81-717ee70b2e83\") " Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.965578 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts" (OuterVolumeSpecName: "scripts") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.969896 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx" (OuterVolumeSpecName: "kube-api-access-x7rgx") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "kube-api-access-x7rgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.987589 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data" (OuterVolumeSpecName: "config-data") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:45 crc kubenswrapper[4675]: I0124 07:14:45.988962 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1819bfe-22cc-4ead-8e81-717ee70b2e83" (UID: "a1819bfe-22cc-4ead-8e81-717ee70b2e83"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064677 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064758 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064769 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rgx\" (UniqueName: \"kubernetes.io/projected/a1819bfe-22cc-4ead-8e81-717ee70b2e83-kube-api-access-x7rgx\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.064778 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1819bfe-22cc-4ead-8e81-717ee70b2e83-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.354890 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" event={"ID":"a1819bfe-22cc-4ead-8e81-717ee70b2e83","Type":"ContainerDied","Data":"d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.355197 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3851889a5cd9dde444f8479f3ed9c32e184c96dd7febac892827c7115ef2056" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.354912 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t2bwz" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.365964 4675 generic.go:334] "Generic (PLEG): container finished" podID="d8e802ff-b559-4ef9-9826-708faf39b488" containerID="297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d" exitCode=0 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.366058 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerDied","Data":"297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.373383 4675 generic.go:334] "Generic (PLEG): container finished" podID="62846c05-d38a-49de-8303-468e98254357" containerID="224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311" exitCode=143 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.373450 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerDied","Data":"224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377697 4675 generic.go:334] "Generic (PLEG): container finished" podID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerID="9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102" exitCode=0 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377749 4675 generic.go:334] "Generic (PLEG): container finished" podID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerID="604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9" exitCode=143 Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377773 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerDied","Data":"9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102"} 
Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.377798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerDied","Data":"604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9"} Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.442872 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443281 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerName="nova-manage" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443295 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerName="nova-manage" Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443313 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443319 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443332 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="init" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443339 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="init" Jan 24 07:14:46 crc kubenswrapper[4675]: E0124 07:14:46.443353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" containerName="nova-cell1-conductor-db-sync" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443359 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" 
containerName="nova-cell1-conductor-db-sync" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443514 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" containerName="nova-manage" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443542 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" containerName="nova-cell1-conductor-db-sync" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.443550 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d870cc-b2a9-442f-9779-bf9fbeb8ce2b" containerName="dnsmasq-dns" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.444125 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.446756 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.453470 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.573003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.573077 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvl4w\" (UniqueName: \"kubernetes.io/projected/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-kube-api-access-hvl4w\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 
crc kubenswrapper[4675]: I0124 07:14:46.573317 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.675054 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.675185 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.675229 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvl4w\" (UniqueName: \"kubernetes.io/projected/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-kube-api-access-hvl4w\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.681943 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.691938 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.698113 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvl4w\" (UniqueName: \"kubernetes.io/projected/8afe3d83-5678-47e9-be7d-dfbf50fa5bc9-kube-api-access-hvl4w\") pod \"nova-cell1-conductor-0\" (UID: \"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9\") " pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.768370 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.770761 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.776684 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877407 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877467 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") pod \"d8e802ff-b559-4ef9-9826-708faf39b488\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877526 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") pod \"d8e802ff-b559-4ef9-9826-708faf39b488\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877642 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") pod \"d8e802ff-b559-4ef9-9826-708faf39b488\" (UID: \"d8e802ff-b559-4ef9-9826-708faf39b488\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877667 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877699 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.877820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") pod \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\" (UID: \"422994aa-4835-4b8e-bc15-ea6e636ffa7f\") " Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.880680 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs" (OuterVolumeSpecName: "logs") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.883384 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz" (OuterVolumeSpecName: "kube-api-access-8lbmz") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "kube-api-access-8lbmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.907312 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg" (OuterVolumeSpecName: "kube-api-access-lh5zg") pod "d8e802ff-b559-4ef9-9826-708faf39b488" (UID: "d8e802ff-b559-4ef9-9826-708faf39b488"). InnerVolumeSpecName "kube-api-access-lh5zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.924986 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8e802ff-b559-4ef9-9826-708faf39b488" (UID: "d8e802ff-b559-4ef9-9826-708faf39b488"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.930811 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.937898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data" (OuterVolumeSpecName: "config-data") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.954397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data" (OuterVolumeSpecName: "config-data") pod "d8e802ff-b559-4ef9-9826-708faf39b488" (UID: "d8e802ff-b559-4ef9-9826-708faf39b488"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.962459 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "422994aa-4835-4b8e-bc15-ea6e636ffa7f" (UID: "422994aa-4835-4b8e-bc15-ea6e636ffa7f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982109 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/422994aa-4835-4b8e-bc15-ea6e636ffa7f-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982226 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982320 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982380 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422994aa-4835-4b8e-bc15-ea6e636ffa7f-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982435 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh5zg\" (UniqueName: \"kubernetes.io/projected/d8e802ff-b559-4ef9-9826-708faf39b488-kube-api-access-lh5zg\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982545 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982605 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lbmz\" (UniqueName: \"kubernetes.io/projected/422994aa-4835-4b8e-bc15-ea6e636ffa7f-kube-api-access-8lbmz\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:46 crc kubenswrapper[4675]: I0124 07:14:46.982667 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8e802ff-b559-4ef9-9826-708faf39b488-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.258606 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 24 07:14:47 crc kubenswrapper[4675]: W0124 07:14:47.260295 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8afe3d83_5678_47e9_be7d_dfbf50fa5bc9.slice/crio-6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee WatchSource:0}: Error finding container 6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee: Status 404 returned error can't find the container with id 6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.371811 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 07:14:47 crc 
kubenswrapper[4675]: I0124 07:14:47.396256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9","Type":"ContainerStarted","Data":"6713ecdcb75cf2fecbada736f12deca6abec995ff5680b5591ca3fa886028fee"}
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.424149 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"422994aa-4835-4b8e-bc15-ea6e636ffa7f","Type":"ContainerDied","Data":"24367b6cdd690c92cd23c350b478002fb6c059c715c154bab90f36addc6d5c2b"}
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.424210 4675 scope.go:117] "RemoveContainer" containerID="9c00edb5fc0391611ce0b09014e6e6283b119256783e2c40dd2a51a58d34b102"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.424387 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.433772 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8e802ff-b559-4ef9-9826-708faf39b488","Type":"ContainerDied","Data":"2cdb85f9986f03860143bd1424df78b4a24959bedd905d66093742accf17b4a5"}
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.433969 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.465788 4675 scope.go:117] "RemoveContainer" containerID="604a83d4a0748a805ac99fbbb18ec1512a8c5b523f97483e353bcf4a2cdd04c9"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.492474 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.520196 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.523982 4675 scope.go:117] "RemoveContainer" containerID="297c27eaec941202545a9b1c1221cdfb969619ef7e96bb0f2b060bef18a2b54d"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.545967 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.557937 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: E0124 07:14:47.558298 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558329 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log"
Jan 24 07:14:47 crc kubenswrapper[4675]: E0124 07:14:47.558363 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558370 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler"
Jan 24 07:14:47 crc kubenswrapper[4675]: E0124 07:14:47.558382 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558388 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558556 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-metadata"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558576 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" containerName="nova-scheduler-scheduler"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.558590 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" containerName="nova-metadata-log"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.559312 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.564568 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.570238 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.581670 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.594041 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.595589 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.598111 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.598292 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.607818 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715693 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715744 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715784 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715813 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715830 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715870 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.715947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817149 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817196 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817244 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817304 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817327 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817372 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.817421 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.818660 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.821155 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.821937 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.822168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.823256 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.824121 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.834654 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"nova-metadata-0\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") " pod="openstack/nova-metadata-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.837041 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"nova-scheduler-0\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") " pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.909504 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:14:47 crc kubenswrapper[4675]: I0124 07:14:47.928283 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.416669 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.426024 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:14:48 crc kubenswrapper[4675]: W0124 07:14:48.428414 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c6c830_f77b_47f7_a874_02324d6c8c39.slice/crio-e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b WatchSource:0}: Error finding container e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b: Status 404 returned error can't find the container with id e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b
Jan 24 07:14:48 crc kubenswrapper[4675]: W0124 07:14:48.433858 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc3534ca_1196_47a7_889c_cead596f7636.slice/crio-c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048 WatchSource:0}: Error finding container c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048: Status 404 returned error can't find the container with id c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.447301 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerStarted","Data":"e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b"}
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.455431 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8afe3d83-5678-47e9-be7d-dfbf50fa5bc9","Type":"ContainerStarted","Data":"7ed7eb9419f7e3c81f3bb1aa6d91f51b28305f96a5c0360a78d7280047efdd4a"}
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.455681 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.477165 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.477148078 podStartE2EDuration="2.477148078s" podCreationTimestamp="2026-01-24 07:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:48.467499495 +0000 UTC m=+1289.763604718" watchObservedRunningTime="2026-01-24 07:14:48.477148078 +0000 UTC m=+1289.773253301"
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.955986 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422994aa-4835-4b8e-bc15-ea6e636ffa7f" path="/var/lib/kubelet/pods/422994aa-4835-4b8e-bc15-ea6e636ffa7f/volumes"
Jan 24 07:14:48 crc kubenswrapper[4675]: I0124 07:14:48.956819 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e802ff-b559-4ef9-9826-708faf39b488" path="/var/lib/kubelet/pods/d8e802ff-b559-4ef9-9826-708faf39b488/volumes"
Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.491362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerStarted","Data":"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"}
Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.497440 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerStarted","Data":"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"}
Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.497511 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerStarted","Data":"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"}
Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.497527 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerStarted","Data":"c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048"}
Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.517547 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.517526875 podStartE2EDuration="2.517526875s" podCreationTimestamp="2026-01-24 07:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:49.513254241 +0000 UTC m=+1290.809359464" watchObservedRunningTime="2026-01-24 07:14:49.517526875 +0000 UTC m=+1290.813632098"
Jan 24 07:14:49 crc kubenswrapper[4675]: I0124 07:14:49.549565 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.549545651 podStartE2EDuration="2.549545651s" podCreationTimestamp="2026-01-24 07:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:49.534402944 +0000 UTC m=+1290.830508167" watchObservedRunningTime="2026-01-24 07:14:49.549545651 +0000 UTC m=+1290.845650874"
Jan 24 07:14:50 crc kubenswrapper[4675]: I0124 07:14:50.507980 4675 generic.go:334] "Generic (PLEG): container finished" podID="62846c05-d38a-49de-8303-468e98254357" containerID="80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41" exitCode=0
Jan 24 07:14:50 crc kubenswrapper[4675]: I0124 07:14:50.508049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerDied","Data":"80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41"}
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.110139 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180250 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") "
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180386 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") "
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180453 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") "
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.180503 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") pod \"62846c05-d38a-49de-8303-468e98254357\" (UID: \"62846c05-d38a-49de-8303-468e98254357\") "
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.182254 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs" (OuterVolumeSpecName: "logs") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.210243 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh" (OuterVolumeSpecName: "kube-api-access-67rnh") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "kube-api-access-67rnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.222090 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.225783 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data" (OuterVolumeSpecName: "config-data") pod "62846c05-d38a-49de-8303-468e98254357" (UID: "62846c05-d38a-49de-8303-468e98254357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.282727 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.283038 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62846c05-d38a-49de-8303-468e98254357-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.283050 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62846c05-d38a-49de-8303-468e98254357-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.283064 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rnh\" (UniqueName: \"kubernetes.io/projected/62846c05-d38a-49de-8303-468e98254357-kube-api-access-67rnh\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.518099 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62846c05-d38a-49de-8303-468e98254357","Type":"ContainerDied","Data":"95d83f46bb241df69cecd5d5ee865b1d6990f0719b8c49d6308bc5af4308f02a"}
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.518945 4675 scope.go:117] "RemoveContainer" containerID="80d35df00d2d5035adc3b9822734918a3159061668385f60be9f212c4e98eb41"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.518164 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.543685 4675 scope.go:117] "RemoveContainer" containerID="224184ae9f63569e4fa7815af3ba54297cb52818ef444f0c88b51bc4890bb311"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.557248 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.567891 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.578839 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:14:51 crc kubenswrapper[4675]: E0124 07:14:51.579353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579371 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log"
Jan 24 07:14:51 crc kubenswrapper[4675]: E0124 07:14:51.579392 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579398 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579563 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-log"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.579588 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="62846c05-d38a-49de-8303-468e98254357" containerName="nova-api-api"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.581600 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.589737 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.606764 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.693950 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.694264 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.694311 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.694396 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.796327 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.796890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.797432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.797881 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.798615 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.802606 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.802795 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics" containerID="cri-o://4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058" gracePeriod=30
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.806461 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.807175 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.820676 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"nova-api-0\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " pod="openstack/nova-api-0"
Jan 24 07:14:51 crc kubenswrapper[4675]: I0124 07:14:51.901088 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.366605 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 24 07:14:52 crc kubenswrapper[4675]: W0124 07:14:52.466574 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27fe021c_fb3a_41e9_a491_b3859b6748e6.slice/crio-f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb WatchSource:0}: Error finding container f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb: Status 404 returned error can't find the container with id f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.467343 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.512871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") pod \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\" (UID: \"740dfadf-4d28-4f03-ab2c-cf51c7e078bf\") "
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.519509 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2" (OuterVolumeSpecName: "kube-api-access-q67r2") pod "740dfadf-4d28-4f03-ab2c-cf51c7e078bf" (UID: "740dfadf-4d28-4f03-ab2c-cf51c7e078bf"). InnerVolumeSpecName "kube-api-access-q67r2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551732 4675 generic.go:334] "Generic (PLEG): container finished" podID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058" exitCode=2
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551869 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551932 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerDied","Data":"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"}
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551981 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"740dfadf-4d28-4f03-ab2c-cf51c7e078bf","Type":"ContainerDied","Data":"8fc4ca63f03726f8d4f4612fb16075bb874d642ea255530f0cde869af0c01186"}
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.551997 4675 scope.go:117] "RemoveContainer" containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.558586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerStarted","Data":"f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb"}
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.617640 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q67r2\" (UniqueName: \"kubernetes.io/projected/740dfadf-4d28-4f03-ab2c-cf51c7e078bf-kube-api-access-q67r2\") on node \"crc\" DevicePath \"\""
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.618579 4675 scope.go:117] "RemoveContainer" containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"
Jan 24 07:14:52 crc kubenswrapper[4675]: E0124 07:14:52.620801 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058\": container with ID starting with 4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058 not found: ID does not exist" containerID="4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.620905 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058"} err="failed to get container status \"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058\": rpc error: code = NotFound desc = could not find container \"4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058\": container with ID starting with 4924c8e19c85a144fe449bfae264b2f36fd3a57706341b20b044356f59319058 not found: ID does not exist"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.626048 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.647752 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.660131 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 24 07:14:52 crc kubenswrapper[4675]: E0124 07:14:52.660511 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.660528 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.660707 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" containerName="kube-state-metrics"
Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.661317 4675 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.664419 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.664614 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.671185 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821668 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821794 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821846 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.821940 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h9xg\" (UniqueName: 
\"kubernetes.io/projected/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-api-access-4h9xg\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.909761 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923302 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h9xg\" (UniqueName: \"kubernetes.io/projected/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-api-access-4h9xg\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923799 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.923884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.927987 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.928923 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.929071 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.929980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.931604 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b742b344-80ea-48bf-bd28-8f1be00b4442-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.952273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h9xg\" (UniqueName: \"kubernetes.io/projected/b742b344-80ea-48bf-bd28-8f1be00b4442-kube-api-access-4h9xg\") pod \"kube-state-metrics-0\" (UID: \"b742b344-80ea-48bf-bd28-8f1be00b4442\") " pod="openstack/kube-state-metrics-0" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.962276 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62846c05-d38a-49de-8303-468e98254357" 
path="/var/lib/kubelet/pods/62846c05-d38a-49de-8303-468e98254357/volumes" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.963046 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740dfadf-4d28-4f03-ab2c-cf51c7e078bf" path="/var/lib/kubelet/pods/740dfadf-4d28-4f03-ab2c-cf51c7e078bf/volumes" Jan 24 07:14:52 crc kubenswrapper[4675]: I0124 07:14:52.976054 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.257232 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 24 07:14:53 crc kubenswrapper[4675]: W0124 07:14:53.278932 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb742b344_80ea_48bf_bd28_8f1be00b4442.slice/crio-e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd WatchSource:0}: Error finding container e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd: Status 404 returned error can't find the container with id e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.576926 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b742b344-80ea-48bf-bd28-8f1be00b4442","Type":"ContainerStarted","Data":"e4885528823a9c2db5a0e58475ac435677515f67119dd1a5bff22c18bcaa21bd"} Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.581335 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerStarted","Data":"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3"} Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.581409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerStarted","Data":"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055"} Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.830273 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.830255094 podStartE2EDuration="2.830255094s" podCreationTimestamp="2026-01-24 07:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:14:53.609044713 +0000 UTC m=+1294.905149936" watchObservedRunningTime="2026-01-24 07:14:53.830255094 +0000 UTC m=+1295.126360307" Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836063 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836509 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" containerID="cri-o://63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04" gracePeriod=30 Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836559 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" containerID="cri-o://c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" gracePeriod=30 Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836622 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" containerID="cri-o://d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" gracePeriod=30 Jan 24 07:14:53 crc kubenswrapper[4675]: I0124 07:14:53.836635 4675 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" containerID="cri-o://15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9" gracePeriod=30 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.148474 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6565db7666-dt2lk" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.148880 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.591984 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" exitCode=0 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592022 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" exitCode=2 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592017 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592034 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04" exitCode=0 Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592056 4675 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.592071 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.597803 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b742b344-80ea-48bf-bd28-8f1be00b4442","Type":"ContainerStarted","Data":"74c7bb11a8e80e07ac874f4ee0791a7d78cffd00427bd158904b53e1c98bfacd"} Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.597863 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 24 07:14:54 crc kubenswrapper[4675]: I0124 07:14:54.618168 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.223786872 podStartE2EDuration="2.618146541s" podCreationTimestamp="2026-01-24 07:14:52 +0000 UTC" firstStartedPulling="2026-01-24 07:14:53.284053506 +0000 UTC m=+1294.580158729" lastFinishedPulling="2026-01-24 07:14:53.678413185 +0000 UTC m=+1294.974518398" observedRunningTime="2026-01-24 07:14:54.61352068 +0000 UTC m=+1295.909625913" watchObservedRunningTime="2026-01-24 07:14:54.618146541 +0000 UTC m=+1295.914251774" Jan 24 07:14:55 crc kubenswrapper[4675]: I0124 07:14:55.962245 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092009 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092465 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092505 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092688 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092748 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092768 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092788 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") pod \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\" (UID: \"c11da70b-e611-45b4-af1b-fe7ac3dacb85\") " Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.092958 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.094258 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.096189 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.107184 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq" (OuterVolumeSpecName: "kube-api-access-bxvrq") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "kube-api-access-bxvrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.123983 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts" (OuterVolumeSpecName: "scripts") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.128354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.183285 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196208 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196237 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxvrq\" (UniqueName: \"kubernetes.io/projected/c11da70b-e611-45b4-af1b-fe7ac3dacb85-kube-api-access-bxvrq\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196250 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c11da70b-e611-45b4-af1b-fe7ac3dacb85-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196259 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.196268 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.201745 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data" (OuterVolumeSpecName: "config-data") pod "c11da70b-e611-45b4-af1b-fe7ac3dacb85" (UID: "c11da70b-e611-45b4-af1b-fe7ac3dacb85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.298355 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11da70b-e611-45b4-af1b-fe7ac3dacb85-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616328 4675 generic.go:334] "Generic (PLEG): container finished" podID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9" exitCode=0 Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616370 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"} Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c11da70b-e611-45b4-af1b-fe7ac3dacb85","Type":"ContainerDied","Data":"588244b07f5a60e0cabab824de73b0c1ab641046dedb0b1f0652661018ee56f9"} Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616423 4675 scope.go:117] "RemoveContainer" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.616562 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.657090 4675 scope.go:117] "RemoveContainer" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.659806 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.679751 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698404 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698784 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698811 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698824 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698830 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698859 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698866 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.698882 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.698888 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699059 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-central-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699074 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="ceilometer-notification-agent" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699083 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="sg-core" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.699095 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" containerName="proxy-httpd" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.700646 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.704088 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.704328 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.704487 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.711906 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.772355 4675 scope.go:117] "RemoveContainer" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.804199 4675 scope.go:117] "RemoveContainer" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813866 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813903 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.813966 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814032 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814072 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.814118 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"ceilometer-0\" (UID: 
\"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.824174 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.831672 4675 scope.go:117] "RemoveContainer" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.833487 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8\": container with ID starting with c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8 not found: ID does not exist" containerID="c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.833545 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8"} err="failed to get container status \"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8\": rpc error: code = NotFound desc = could not find container \"c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8\": container with ID starting with c756c0ee9df3c40a733e2152fc692580db0b829081a1afa04e4778a874eafbc8 not found: ID does not exist" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.833581 4675 scope.go:117] "RemoveContainer" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.834013 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462\": container with ID starting with 
d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462 not found: ID does not exist" containerID="d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834077 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462"} err="failed to get container status \"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462\": rpc error: code = NotFound desc = could not find container \"d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462\": container with ID starting with d1ae5378c4ca8a2bad7f0a30eb548babcf8b37852d7284400a821a27fa0d6462 not found: ID does not exist" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834126 4675 scope.go:117] "RemoveContainer" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.834900 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9\": container with ID starting with 15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9 not found: ID does not exist" containerID="15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834939 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9"} err="failed to get container status \"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9\": rpc error: code = NotFound desc = could not find container \"15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9\": container with ID starting with 15c61be00db2b94d23b846147058d3c373d3f26014a93bd7bdcf98e58f9bf8e9 not found: ID does not 
exist" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.834993 4675 scope.go:117] "RemoveContainer" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04" Jan 24 07:14:56 crc kubenswrapper[4675]: E0124 07:14:56.835417 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04\": container with ID starting with 63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04 not found: ID does not exist" containerID="63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.835450 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04"} err="failed to get container status \"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04\": rpc error: code = NotFound desc = could not find container \"63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04\": container with ID starting with 63172c92227219fbb1aa5942268e988abf37f304ffb666a30f22c2bc10de4b04 not found: ID does not exist" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915528 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915611 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc 
kubenswrapper[4675]: I0124 07:14:56.915635 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915660 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915776 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915816 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.915849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod 
\"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.916669 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.916769 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.923080 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.924074 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.924909 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.925408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.933428 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.937231 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"ceilometer-0\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") " pod="openstack/ceilometer-0" Jan 24 07:14:56 crc kubenswrapper[4675]: I0124 07:14:56.952172 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11da70b-e611-45b4-af1b-fe7ac3dacb85" path="/var/lib/kubelet/pods/c11da70b-e611-45b4-af1b-fe7ac3dacb85/volumes" Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.077410 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.536316 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:14:57 crc kubenswrapper[4675]: W0124 07:14:57.545790 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f8195be_66ea_4e34_807c_1c8eae25ab81.slice/crio-1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e WatchSource:0}: Error finding container 1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e: Status 404 returned error can't find the container with id 1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.548164 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.629365 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e"} Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.910144 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.929540 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.929598 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 07:14:57 crc kubenswrapper[4675]: I0124 07:14:57.962159 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.317579 4675 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445871 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445898 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445930 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.445965 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.446027 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.446058 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") pod \"6462a086-070a-4998-8a59-cb4ccbf19867\" (UID: \"6462a086-070a-4998-8a59-cb4ccbf19867\") " Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.447098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs" (OuterVolumeSpecName: "logs") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.462086 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.475734 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p" (OuterVolumeSpecName: "kube-api-access-kw79p") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "kube-api-access-kw79p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.491900 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts" (OuterVolumeSpecName: "scripts") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.496560 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data" (OuterVolumeSpecName: "config-data") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.513352 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.522244 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6462a086-070a-4998-8a59-cb4ccbf19867" (UID: "6462a086-070a-4998-8a59-cb4ccbf19867"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549787 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549815 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549826 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6462a086-070a-4998-8a59-cb4ccbf19867-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549837 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw79p\" (UniqueName: \"kubernetes.io/projected/6462a086-070a-4998-8a59-cb4ccbf19867-kube-api-access-kw79p\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549846 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549855 4675 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6462a086-070a-4998-8a59-cb4ccbf19867-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.549862 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6462a086-070a-4998-8a59-cb4ccbf19867-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.645530 4675 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"} Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656017 4675 generic.go:334] "Generic (PLEG): container finished" podID="6462a086-070a-4998-8a59-cb4ccbf19867" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50" exitCode=137 Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656090 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6565db7666-dt2lk" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656175 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerDied","Data":"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"} Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656213 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6565db7666-dt2lk" event={"ID":"6462a086-070a-4998-8a59-cb4ccbf19867","Type":"ContainerDied","Data":"d950b238b60f1812543dfb4f7f5294f5560f40c993673b23b13c0d2609edbe30"} Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.656234 4675 scope.go:117] "RemoveContainer" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.734327 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.910359 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"] Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.933604 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6565db7666-dt2lk"] Jan 24 07:14:58 crc 
kubenswrapper[4675]: I0124 07:14:58.951040 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.951162 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:14:58 crc kubenswrapper[4675]: I0124 07:14:58.963177 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" path="/var/lib/kubelet/pods/6462a086-070a-4998-8a59-cb4ccbf19867/volumes" Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.021569 4675 scope.go:117] "RemoveContainer" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50" Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.042999 4675 scope.go:117] "RemoveContainer" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3" Jan 24 07:14:59 crc kubenswrapper[4675]: E0124 07:14:59.043547 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3\": container with ID starting with 32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3 not found: ID does not exist" containerID="32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3" Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.043578 4675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3"} err="failed to get container status \"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3\": rpc error: code = NotFound desc = could not find container \"32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3\": container with ID starting with 32a6e6ef5a59609c1a6b4fe207e3f3e536488a464fb05b1f258c19d02b21e2d3 not found: ID does not exist" Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.043598 4675 scope.go:117] "RemoveContainer" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50" Jan 24 07:14:59 crc kubenswrapper[4675]: E0124 07:14:59.044074 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50\": container with ID starting with 1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50 not found: ID does not exist" containerID="1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50" Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.044113 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50"} err="failed to get container status \"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50\": rpc error: code = NotFound desc = could not find container \"1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50\": container with ID starting with 1c95e6106c593c85fa5e1d26db252eb286d2adc4d65a941d38f82384fe82af50 not found: ID does not exist" Jan 24 07:14:59 crc kubenswrapper[4675]: I0124 07:14:59.672848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"} 
Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314093 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"] Jan 24 07:15:00 crc kubenswrapper[4675]: E0124 07:15:00.314501 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314522 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" Jan 24 07:15:00 crc kubenswrapper[4675]: E0124 07:15:00.314551 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314560 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314751 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon-log" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.314773 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6462a086-070a-4998-8a59-cb4ccbf19867" containerName="horizon" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.315314 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.318552 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.318747 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.337695 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"] Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.380620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.380694 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.380760 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.482230 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.482300 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.482334 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.483107 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.487392 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.497126 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"collect-profiles-29487315-nmtzz\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.642897 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:00 crc kubenswrapper[4675]: I0124 07:15:00.700171 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"} Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.152647 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"] Jan 24 07:15:01 crc kubenswrapper[4675]: W0124 07:15:01.160181 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992bc9f8_4adf_4940_95d5_942895a4d935.slice/crio-0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c WatchSource:0}: Error finding container 0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c: Status 404 returned error can't find the container with id 0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.713257 4675 generic.go:334] 
"Generic (PLEG): container finished" podID="992bc9f8-4adf-4940-95d5-942895a4d935" containerID="4ceca7bb4c3f8f330a726083a805861d2285d706134fb31908c2ce567855cf82" exitCode=0 Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.713368 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" event={"ID":"992bc9f8-4adf-4940-95d5-942895a4d935","Type":"ContainerDied","Data":"4ceca7bb4c3f8f330a726083a805861d2285d706134fb31908c2ce567855cf82"} Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.713535 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" event={"ID":"992bc9f8-4adf-4940-95d5-942895a4d935","Type":"ContainerStarted","Data":"0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c"} Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.719638 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerStarted","Data":"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"} Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.720490 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.792196 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3503014589999998 podStartE2EDuration="5.792176232s" podCreationTimestamp="2026-01-24 07:14:56 +0000 UTC" firstStartedPulling="2026-01-24 07:14:57.547904541 +0000 UTC m=+1298.844009764" lastFinishedPulling="2026-01-24 07:15:00.989779314 +0000 UTC m=+1302.285884537" observedRunningTime="2026-01-24 07:15:01.788163125 +0000 UTC m=+1303.084268358" watchObservedRunningTime="2026-01-24 07:15:01.792176232 +0000 UTC m=+1303.088281455" Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.901982 
4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:15:01 crc kubenswrapper[4675]: I0124 07:15:01.902048 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:15:02 crc kubenswrapper[4675]: I0124 07:15:02.985909 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:02 crc kubenswrapper[4675]: I0124 07:15:02.986048 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.001878 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.131614 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.250903 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") pod \"992bc9f8-4adf-4940-95d5-942895a4d935\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.251413 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") pod \"992bc9f8-4adf-4940-95d5-942895a4d935\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.251466 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") pod \"992bc9f8-4adf-4940-95d5-942895a4d935\" (UID: \"992bc9f8-4adf-4940-95d5-942895a4d935\") " Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.252376 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume" (OuterVolumeSpecName: "config-volume") pod "992bc9f8-4adf-4940-95d5-942895a4d935" (UID: "992bc9f8-4adf-4940-95d5-942895a4d935"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.256891 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "992bc9f8-4adf-4940-95d5-942895a4d935" (UID: "992bc9f8-4adf-4940-95d5-942895a4d935"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.257134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt" (OuterVolumeSpecName: "kube-api-access-x7wpt") pod "992bc9f8-4adf-4940-95d5-942895a4d935" (UID: "992bc9f8-4adf-4940-95d5-942895a4d935"). InnerVolumeSpecName "kube-api-access-x7wpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.353478 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7wpt\" (UniqueName: \"kubernetes.io/projected/992bc9f8-4adf-4940-95d5-942895a4d935-kube-api-access-x7wpt\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.353514 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/992bc9f8-4adf-4940-95d5-942895a4d935-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.353528 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/992bc9f8-4adf-4940-95d5-942895a4d935-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.742779 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" event={"ID":"992bc9f8-4adf-4940-95d5-942895a4d935","Type":"ContainerDied","Data":"0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c"} Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.742823 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fa6ea80827798d6b17c9a2c7d23da940c673bc9dc856f8e3d6e5e31e6a0562c" Jan 24 07:15:03 crc kubenswrapper[4675]: I0124 07:15:03.743156 4675 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz" Jan 24 07:15:07 crc kubenswrapper[4675]: I0124 07:15:07.935473 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:07 crc kubenswrapper[4675]: I0124 07:15:07.938111 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:07 crc kubenswrapper[4675]: I0124 07:15:07.951072 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 07:15:08 crc kubenswrapper[4675]: I0124 07:15:08.629923 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:15:08 crc kubenswrapper[4675]: I0124 07:15:08.629988 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:15:08 crc kubenswrapper[4675]: I0124 07:15:08.790854 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.705624 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.776091 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") pod \"a382715e-bef1-47d2-872f-21ffbda9df32\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.776230 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") pod \"a382715e-bef1-47d2-872f-21ffbda9df32\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.776263 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") pod \"a382715e-bef1-47d2-872f-21ffbda9df32\" (UID: \"a382715e-bef1-47d2-872f-21ffbda9df32\") " Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.782198 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc" (OuterVolumeSpecName: "kube-api-access-nbjjc") pod "a382715e-bef1-47d2-872f-21ffbda9df32" (UID: "a382715e-bef1-47d2-872f-21ffbda9df32"). InnerVolumeSpecName "kube-api-access-nbjjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.800072 4675 generic.go:334] "Generic (PLEG): container finished" podID="a382715e-bef1-47d2-872f-21ffbda9df32" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" exitCode=137 Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.801886 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.802101 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerDied","Data":"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a"} Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.802164 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a382715e-bef1-47d2-872f-21ffbda9df32","Type":"ContainerDied","Data":"738037063e3609a68332a4891fdfb34e8c71d23849dea8ebc3779041066480cc"} Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.802188 4675 scope.go:117] "RemoveContainer" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.809536 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a382715e-bef1-47d2-872f-21ffbda9df32" (UID: "a382715e-bef1-47d2-872f-21ffbda9df32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.810881 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data" (OuterVolumeSpecName: "config-data") pod "a382715e-bef1-47d2-872f-21ffbda9df32" (UID: "a382715e-bef1-47d2-872f-21ffbda9df32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.878522 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.879021 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbjjc\" (UniqueName: \"kubernetes.io/projected/a382715e-bef1-47d2-872f-21ffbda9df32-kube-api-access-nbjjc\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.879115 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a382715e-bef1-47d2-872f-21ffbda9df32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.883025 4675 scope.go:117] "RemoveContainer" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" Jan 24 07:15:09 crc kubenswrapper[4675]: E0124 07:15:09.883521 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a\": container with ID starting with f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a not found: ID does not exist" containerID="f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a" Jan 24 07:15:09 crc kubenswrapper[4675]: I0124 07:15:09.883553 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a"} err="failed to get container status \"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a\": rpc error: code = NotFound desc = could not find container \"f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a\": container with ID 
starting with f50fb8964e70d90d1eccc46c24ba61c8816f72f69cbd62424e7e960adaf3a24a not found: ID does not exist" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.141499 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.152074 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.177921 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: E0124 07:15:10.178366 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178387 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 07:15:10 crc kubenswrapper[4675]: E0124 07:15:10.178429 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" containerName="collect-profiles" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178437 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" containerName="collect-profiles" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178635 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" containerName="nova-cell1-novncproxy-novncproxy" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.178664 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" containerName="collect-profiles" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.179432 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.185568 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.185843 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.185993 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.198005 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.286691 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.286813 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfx94\" (UniqueName: \"kubernetes.io/projected/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-kube-api-access-qfx94\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.286856 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc 
kubenswrapper[4675]: I0124 07:15:10.286899 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.287038 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389109 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389329 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfx94\" (UniqueName: \"kubernetes.io/projected/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-kube-api-access-qfx94\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc 
kubenswrapper[4675]: I0124 07:15:10.389358 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.389391 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.393435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.393629 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.394896 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.400633 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.406849 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfx94\" (UniqueName: \"kubernetes.io/projected/a485ae65-6b4d-4cc6-9623-dc0b722f47e8-kube-api-access-qfx94\") pod \"nova-cell1-novncproxy-0\" (UID: \"a485ae65-6b4d-4cc6-9623-dc0b722f47e8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.510214 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:10 crc kubenswrapper[4675]: I0124 07:15:10.956565 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a382715e-bef1-47d2-872f-21ffbda9df32" path="/var/lib/kubelet/pods/a382715e-bef1-47d2-872f-21ffbda9df32/volumes" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.012488 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 24 07:15:11 crc kubenswrapper[4675]: W0124 07:15:11.016970 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda485ae65_6b4d_4cc6_9623_dc0b722f47e8.slice/crio-32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d WatchSource:0}: Error finding container 32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d: Status 404 returned error can't find the container with id 32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.830938 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a485ae65-6b4d-4cc6-9623-dc0b722f47e8","Type":"ContainerStarted","Data":"0b0de0d2f11ca21b621155c6914f60384a4f54529e763fda6e9631401b8bc8e8"} Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.831002 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a485ae65-6b4d-4cc6-9623-dc0b722f47e8","Type":"ContainerStarted","Data":"32d8a7e6bdb666f46eefbbddbd3ffdd7479ee88dcb47323f8514cacec900727d"} Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.854539 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.8545182 podStartE2EDuration="1.8545182s" podCreationTimestamp="2026-01-24 07:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:11.849045497 +0000 UTC m=+1313.145150740" watchObservedRunningTime="2026-01-24 07:15:11.8545182 +0000 UTC m=+1313.150623433" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.905537 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.906235 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.908335 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:11 crc kubenswrapper[4675]: I0124 07:15:11.916680 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 07:15:12 crc kubenswrapper[4675]: I0124 07:15:12.840285 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:12 crc kubenswrapper[4675]: I0124 07:15:12.844648 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 
24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.060384 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"] Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.062316 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.086939 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"] Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159041 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159156 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159211 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5l2t\" (UniqueName: 
\"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.159771 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.260968 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261026 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261083 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.261192 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.262174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.262672 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.263213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.263887 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.264525 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.288657 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"dnsmasq-dns-cd5cbd7b9-2vwtf\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.390658 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:13 crc kubenswrapper[4675]: I0124 07:15:13.903542 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"]
Jan 24 07:15:14 crc kubenswrapper[4675]: I0124 07:15:14.857573 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerID="e61aa274860730298b3d466a37bbb7b9f9970e99b80ea9db136fc24849710d8e" exitCode=0
Jan 24 07:15:14 crc kubenswrapper[4675]: I0124 07:15:14.857645 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerDied","Data":"e61aa274860730298b3d466a37bbb7b9f9970e99b80ea9db136fc24849710d8e"}
Jan 24 07:15:14 crc kubenswrapper[4675]: I0124 07:15:14.858964 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerStarted","Data":"2c3c2a43e5e1f891dc078496766b3dbc527e0916a446f16d71f3f1e737ccce2c"}
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.257178 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.257625 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent" containerID="cri-o://06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914" gracePeriod=30
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.258397 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core" containerID="cri-o://e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c" gracePeriod=30
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.258533 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd" containerID="cri-o://d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f" gracePeriod=30
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.258602 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent" containerID="cri-o://d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec" gracePeriod=30
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.340880 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.341815 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.510874 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.869156 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerStarted","Data":"0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480"}
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.870216 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873467 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f" exitCode=0
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873491 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c" exitCode=2
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873499 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914" exitCode=0
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873698 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" containerID="cri-o://f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" gracePeriod=30
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873930 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"}
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873966 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"}
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.873980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"}
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.874047 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" containerID="cri-o://47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" gracePeriod=30
Jan 24 07:15:15 crc kubenswrapper[4675]: I0124 07:15:15.903469 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" podStartSLOduration=2.903447606 podStartE2EDuration="2.903447606s" podCreationTimestamp="2026-01-24 07:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:15.896744073 +0000 UTC m=+1317.192849336" watchObservedRunningTime="2026-01-24 07:15:15.903447606 +0000 UTC m=+1317.199552829"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.574102 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.628813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.628885 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.628917 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.628980 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629050 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629071 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629129 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") pod \"3f8195be-66ea-4e34-807c-1c8eae25ab81\" (UID: \"3f8195be-66ea-4e34-807c-1c8eae25ab81\") "
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629489 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.629813 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.630095 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.652059 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66" (OuterVolumeSpecName: "kube-api-access-5xx66") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "kube-api-access-5xx66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.653427 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts" (OuterVolumeSpecName: "scripts") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.694117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733363 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-scripts\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733392 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f8195be-66ea-4e34-807c-1c8eae25ab81-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733402 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xx66\" (UniqueName: \"kubernetes.io/projected/3f8195be-66ea-4e34-807c-1c8eae25ab81-kube-api-access-5xx66\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.733411 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.802201 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.824441 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.832809 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data" (OuterVolumeSpecName: "config-data") pod "3f8195be-66ea-4e34-807c-1c8eae25ab81" (UID: "3f8195be-66ea-4e34-807c-1c8eae25ab81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.835853 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.835873 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.835886 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f8195be-66ea-4e34-807c-1c8eae25ab81-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.884537 4675 generic.go:334] "Generic (PLEG): container finished" podID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec" exitCode=0
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.884939 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.884914 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"}
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.885697 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f8195be-66ea-4e34-807c-1c8eae25ab81","Type":"ContainerDied","Data":"1a359bd29e289a4066ee4c4bf87970406ead8199cb47846908dabb10e863741e"}
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.885789 4675 scope.go:117] "RemoveContainer" containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.887156 4675 generic.go:334] "Generic (PLEG): container finished" podID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" exitCode=143
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.887758 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerDied","Data":"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055"}
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.910670 4675 scope.go:117] "RemoveContainer" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.923533 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.931254 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.938547 4675 scope.go:117] "RemoveContainer" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.957706 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" path="/var/lib/kubelet/pods/3f8195be-66ea-4e34-807c-1c8eae25ab81/volumes"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958560 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958873 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958888 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent"
Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958907 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958914 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent"
Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958926 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958931 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core"
Jan 24 07:15:16 crc kubenswrapper[4675]: E0124 07:15:16.958953 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.958959 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959115 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-central-agent"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959132 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="sg-core"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959145 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="ceilometer-notification-agent"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.959157 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8195be-66ea-4e34-807c-1c8eae25ab81" containerName="proxy-httpd"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.960957 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.961080 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.967634 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.967779 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.967856 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 24 07:15:16 crc kubenswrapper[4675]: I0124 07:15:16.989400 4675 scope.go:117] "RemoveContainer" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.024649 4675 scope.go:117] "RemoveContainer" containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"
Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.025123 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f\": container with ID starting with d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f not found: ID does not exist" containerID="d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025167 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f"} err="failed to get container status \"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f\": rpc error: code = NotFound desc = could not find container \"d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f\": container with ID starting with d153e90d258098198b0620d5fc57a4a9dff0906c06dd83906a1bd81fafbd5c8f not found: ID does not exist"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025193 4675 scope.go:117] "RemoveContainer" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"
Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.025511 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c\": container with ID starting with e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c not found: ID does not exist" containerID="e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025547 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c"} err="failed to get container status \"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c\": rpc error: code = NotFound desc = could not find container \"e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c\": container with ID starting with e8fc71e24af1307d79f37d323fac5c8d6ccec47511e29522ab6917a89f55165c not found: ID does not exist"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025575 4675 scope.go:117] "RemoveContainer" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"
Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.025928 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec\": container with ID starting with d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec not found: ID does not exist" containerID="d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025952 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec"} err="failed to get container status \"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec\": rpc error: code = NotFound desc = could not find container \"d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec\": container with ID starting with d84b5fed327d3c6b94cf1616df4e2d0472cc9426d81e4bce08867fcfe464fbec not found: ID does not exist"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.025965 4675 scope.go:117] "RemoveContainer" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"
Jan 24 07:15:17 crc kubenswrapper[4675]: E0124 07:15:17.026181 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914\": container with ID starting with 06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914 not found: ID does not exist" containerID="06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.026208 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914"} err="failed to get container status \"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914\": rpc error: code = NotFound desc = could not find container \"06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914\": container with ID starting with 06b60bc2bb5286079509b18c29729b0ee79d81b913e9c8fd08ebbfb135611914 not found: ID does not exist"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040651 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040855 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040889 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040927 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.040984 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.041035 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.041071 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.041095 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.142940 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143019 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143058 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143092 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143129 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.143251 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.144321 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.144369 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148270 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148272 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148401 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.148600 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0"
Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.157228 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " 
pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.163627 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"ceilometer-0\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.281121 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.594643 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.828439 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:17 crc kubenswrapper[4675]: I0124 07:15:17.920545 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"184d1377ee27c4e0aa9c78bfe778ae0e24d8f487e027e7a4ab2ff93d1556f7a5"} Jan 24 07:15:18 crc kubenswrapper[4675]: I0124 07:15:18.936619 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.424057 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.601878 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.601976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602186 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602216 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") pod \"27fe021c-fb3a-41e9-a491-b3859b6748e6\" (UID: \"27fe021c-fb3a-41e9-a491-b3859b6748e6\") " Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602270 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs" (OuterVolumeSpecName: "logs") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.602599 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27fe021c-fb3a-41e9-a491-b3859b6748e6-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.613145 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx" (OuterVolumeSpecName: "kube-api-access-xgxcx") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "kube-api-access-xgxcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.662305 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data" (OuterVolumeSpecName: "config-data") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.672898 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27fe021c-fb3a-41e9-a491-b3859b6748e6" (UID: "27fe021c-fb3a-41e9-a491-b3859b6748e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.705645 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.705671 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27fe021c-fb3a-41e9-a491-b3859b6748e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.705681 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgxcx\" (UniqueName: \"kubernetes.io/projected/27fe021c-fb3a-41e9-a491-b3859b6748e6-kube-api-access-xgxcx\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953278 4675 generic.go:334] "Generic (PLEG): container finished" podID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" exitCode=0 Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953518 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerDied","Data":"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27fe021c-fb3a-41e9-a491-b3859b6748e6","Type":"ContainerDied","Data":"f8066edee7f8292eab8a240fbaa2f4738635a9541273589250ca9f84f41a04cb"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953559 4675 scope.go:117] "RemoveContainer" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.953670 4675 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.957798 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab"} Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.979590 4675 scope.go:117] "RemoveContainer" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" Jan 24 07:15:19 crc kubenswrapper[4675]: I0124 07:15:19.989221 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.001071 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.015505 4675 scope.go:117] "RemoveContainer" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.026743 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.027082 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027099 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.027143 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027267 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027439 
4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-api" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.027460 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" containerName="nova-api-log" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.028973 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.034640 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.034850 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.035216 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.037597 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3\": container with ID starting with 47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3 not found: ID does not exist" containerID="47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.037653 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3"} err="failed to get container status \"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3\": rpc error: code = NotFound desc = could not find container \"47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3\": container with ID starting with 
47936a4b84ec2f90c980fbc499a5cd7c7366d9b5663a44cd841eca0a28e72af3 not found: ID does not exist" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.037688 4675 scope.go:117] "RemoveContainer" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" Jan 24 07:15:20 crc kubenswrapper[4675]: E0124 07:15:20.040789 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055\": container with ID starting with f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055 not found: ID does not exist" containerID="f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.040833 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055"} err="failed to get container status \"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055\": rpc error: code = NotFound desc = could not find container \"f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055\": container with ID starting with f89974c41944bb36de426b625000e3e7b92c443ca5f60f606665f2455c6ab055 not found: ID does not exist" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.055425 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213029 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213669 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213793 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213852 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.213880 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.214003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316165 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod 
\"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316513 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316660 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.316920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.317057 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.317211 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.317012 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.320739 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.320953 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.323085 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.323286 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.335296 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod \"nova-api-0\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.354348 4675 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.517707 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.545604 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:20 crc kubenswrapper[4675]: W0124 07:15:20.879687 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce57208f_54cc_491b_b898_ba4fddd26d3c.slice/crio-15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee WatchSource:0}: Error finding container 15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee: Status 404 returned error can't find the container with id 15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.901961 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.955973 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27fe021c-fb3a-41e9-a491-b3859b6748e6" path="/var/lib/kubelet/pods/27fe021c-fb3a-41e9-a491-b3859b6748e6/volumes" Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.966661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerStarted","Data":"15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee"} Jan 24 07:15:20 crc kubenswrapper[4675]: I0124 07:15:20.969114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc"} Jan 24 07:15:20 crc 
kubenswrapper[4675]: I0124 07:15:20.986885 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.281245 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.282908 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.299460 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.300086 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.304558 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367441 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367615 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367664 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.367708 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469423 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469483 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469585 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.469630 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.473165 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.477336 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.478371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.485037 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"nova-cell1-cell-mapping-dxv2k\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.612632 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.980451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerStarted","Data":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"} Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.980771 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerStarted","Data":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"} Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984423 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" containerID="cri-o://74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984521 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" containerID="cri-o://e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984558 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" containerID="cri-o://bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984588 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" 
containerID="cri-o://1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab" gracePeriod=30 Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerStarted","Data":"e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56"} Jan 24 07:15:21 crc kubenswrapper[4675]: I0124 07:15:21.984797 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 24 07:15:22 crc kubenswrapper[4675]: I0124 07:15:22.009036 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.00901277 podStartE2EDuration="2.00901277s" podCreationTimestamp="2026-01-24 07:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:22.00198195 +0000 UTC m=+1323.298087183" watchObservedRunningTime="2026-01-24 07:15:22.00901277 +0000 UTC m=+1323.305117993" Jan 24 07:15:22 crc kubenswrapper[4675]: I0124 07:15:22.199658 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.612290511 podStartE2EDuration="6.19962938s" podCreationTimestamp="2026-01-24 07:15:16 +0000 UTC" firstStartedPulling="2026-01-24 07:15:17.841491099 +0000 UTC m=+1319.137596322" lastFinishedPulling="2026-01-24 07:15:21.428829968 +0000 UTC m=+1322.724935191" observedRunningTime="2026-01-24 07:15:22.034203091 +0000 UTC m=+1323.330308304" watchObservedRunningTime="2026-01-24 07:15:22.19962938 +0000 UTC m=+1323.495734603" Jan 24 07:15:22 crc kubenswrapper[4675]: I0124 07:15:22.202938 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002686 4675 generic.go:334] "Generic (PLEG): container 
finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56" exitCode=0 Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002986 4675 generic.go:334] "Generic (PLEG): container finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc" exitCode=2 Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002999 4675 generic.go:334] "Generic (PLEG): container finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab" exitCode=0 Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.002752 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.003059 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.003073 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.007151 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerStarted","Data":"d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.007174 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerStarted","Data":"cb36a9d1535f5681537b95b7ba3c55273a5ae88f25755d1811dccbc09984b358"} Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.037173 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dxv2k" podStartSLOduration=2.03714811 podStartE2EDuration="2.03714811s" podCreationTimestamp="2026-01-24 07:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:23.024405531 +0000 UTC m=+1324.320510754" watchObservedRunningTime="2026-01-24 07:15:23.03714811 +0000 UTC m=+1324.333253343" Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.391881 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.481759 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:15:23 crc kubenswrapper[4675]: I0124 07:15:23.482169 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" containerID="cri-o://c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7" gracePeriod=10 Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.020948 4675 generic.go:334] "Generic (PLEG): container finished" podID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerID="c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7" exitCode=0 Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.021072 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" 
event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerDied","Data":"c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7"} Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.021278 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" event={"ID":"a9bf7666-9ba5-43db-a358-1a2df0e0b118","Type":"ContainerDied","Data":"0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb"} Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.021298 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f3336fcb7e0683104f9711241a9a218a04294fb74555900473d7d223d3a17cb" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.048214 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138617 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138737 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138759 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138789 4675 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138814 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.138866 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") pod \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\" (UID: \"a9bf7666-9ba5-43db-a358-1a2df0e0b118\") " Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.152296 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc" (OuterVolumeSpecName: "kube-api-access-vmmvc") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "kube-api-access-vmmvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.210003 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.212017 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.224278 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config" (OuterVolumeSpecName: "config") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.225343 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241734 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241774 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmmvc\" (UniqueName: \"kubernetes.io/projected/a9bf7666-9ba5-43db-a358-1a2df0e0b118-kube-api-access-vmmvc\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241786 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241801 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.241812 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.258741 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9bf7666-9ba5-43db-a358-1a2df0e0b118" (UID: "a9bf7666-9ba5-43db-a358-1a2df0e0b118"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:15:24 crc kubenswrapper[4675]: I0124 07:15:24.343409 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9bf7666-9ba5-43db-a358-1a2df0e0b118-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:25 crc kubenswrapper[4675]: I0124 07:15:25.030037 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-9mbx2" Jan 24 07:15:25 crc kubenswrapper[4675]: I0124 07:15:25.066607 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:15:25 crc kubenswrapper[4675]: I0124 07:15:25.077011 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-9mbx2"] Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.042545 4675 generic.go:334] "Generic (PLEG): container finished" podID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerID="74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c" exitCode=0 Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.042586 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c"} Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.317994 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.382936 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383042 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383093 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383117 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383161 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383180 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383276 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.383340 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") pod \"fe8d76d6-1c10-4489-8dd7-913259f97b21\" (UID: \"fe8d76d6-1c10-4489-8dd7-913259f97b21\") " Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.385385 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.387487 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.391428 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts" (OuterVolumeSpecName: "scripts") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.408590 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5" (OuterVolumeSpecName: "kube-api-access-8sbn5") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "kube-api-access-8sbn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.444496 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.470452 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.485981 4675 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486031 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sbn5\" (UniqueName: \"kubernetes.io/projected/fe8d76d6-1c10-4489-8dd7-913259f97b21-kube-api-access-8sbn5\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486044 4675 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe8d76d6-1c10-4489-8dd7-913259f97b21-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486098 4675 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486112 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.486126 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.500902 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: 
"fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.512071 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data" (OuterVolumeSpecName: "config-data") pod "fe8d76d6-1c10-4489-8dd7-913259f97b21" (UID: "fe8d76d6-1c10-4489-8dd7-913259f97b21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.587171 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.587204 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8d76d6-1c10-4489-8dd7-913259f97b21-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:26 crc kubenswrapper[4675]: I0124 07:15:26.954471 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" path="/var/lib/kubelet/pods/a9bf7666-9ba5-43db-a358-1a2df0e0b118/volumes" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.055121 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe8d76d6-1c10-4489-8dd7-913259f97b21","Type":"ContainerDied","Data":"184d1377ee27c4e0aa9c78bfe778ae0e24d8f487e027e7a4ab2ff93d1556f7a5"} Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.055195 4675 scope.go:117] "RemoveContainer" containerID="e22826235f6c0a5f7fcb14945a7c72821a350b6b9d34c9fed1ac60a55ae29e56" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.055285 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.089051 4675 scope.go:117] "RemoveContainer" containerID="bda3449e20a1024a10e072c7fb1aeedc4fbba8ed22e5ecb699bd653d3b26dffc" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.126405 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.139786 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150223 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150774 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150801 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150823 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150832 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150848 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150883 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150893 4675 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="init" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150901 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="init" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150914 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150923 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" Jan 24 07:15:27 crc kubenswrapper[4675]: E0124 07:15:27.150939 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.150949 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151178 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="sg-core" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151197 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bf7666-9ba5-43db-a358-1a2df0e0b118" containerName="dnsmasq-dns" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151213 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-central-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151229 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="ceilometer-notification-agent" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.151246 4675 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" containerName="proxy-httpd" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.155492 4675 scope.go:117] "RemoveContainer" containerID="1abc59d24561abccf1767872a358579cf873d1091e5836e14169cd76c0cd3aab" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.165363 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.170294 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.170412 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.170584 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.176551 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.194777 4675 scope.go:117] "RemoveContainer" containerID="74fe895fd181a36cae7a5cb232607ff9ad42b50773c595b8edf5ca1fd42d6a6c" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304186 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-log-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-run-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" 
Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304294 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9kkd\" (UniqueName: \"kubernetes.io/projected/ed571c62-3ced-4952-a932-37a5a84da52f-kube-api-access-l9kkd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304345 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304527 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304571 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-scripts\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.304663 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-config-data\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406302 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-scripts\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406657 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-config-data\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406803 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-log-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406828 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-run-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9kkd\" (UniqueName: \"kubernetes.io/projected/ed571c62-3ced-4952-a932-37a5a84da52f-kube-api-access-l9kkd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " 
pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406903 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406948 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.406973 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.408077 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-run-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.408492 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed571c62-3ced-4952-a932-37a5a84da52f-log-httpd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.412914 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-scripts\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.413117 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.421691 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-config-data\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.423098 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.423514 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed571c62-3ced-4952-a932-37a5a84da52f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.439228 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9kkd\" (UniqueName: \"kubernetes.io/projected/ed571c62-3ced-4952-a932-37a5a84da52f-kube-api-access-l9kkd\") pod \"ceilometer-0\" (UID: \"ed571c62-3ced-4952-a932-37a5a84da52f\") " pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 
07:15:27.485044 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 24 07:15:27 crc kubenswrapper[4675]: I0124 07:15:27.933537 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.063415 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"5054ee53168c2196b91018ade5a9641f6cc03ad48c115ecdacfa04ce81b91961"} Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.066584 4675 generic.go:334] "Generic (PLEG): container finished" podID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerID="d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4" exitCode=0 Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.066622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerDied","Data":"d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4"} Jan 24 07:15:28 crc kubenswrapper[4675]: I0124 07:15:28.955542 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8d76d6-1c10-4489-8dd7-913259f97b21" path="/var/lib/kubelet/pods/fe8d76d6-1c10-4489-8dd7-913259f97b21/volumes" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.076701 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"cf75ef15edf87dd90a39b57b53a81a05a390853dde0b6283a4e627d1e08f0a2b"} Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.552741 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.747562 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.748060 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.748140 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.748171 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") pod \"cd0aa104-48a4-4eab-afcc-2ef03d860551\" (UID: \"cd0aa104-48a4-4eab-afcc-2ef03d860551\") " Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.753397 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts" (OuterVolumeSpecName: "scripts") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.753950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n" (OuterVolumeSpecName: "kube-api-access-pkf5n") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "kube-api-access-pkf5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.776247 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data" (OuterVolumeSpecName: "config-data") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.779484 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd0aa104-48a4-4eab-afcc-2ef03d860551" (UID: "cd0aa104-48a4-4eab-afcc-2ef03d860551"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849916 4675 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-scripts\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849942 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849955 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkf5n\" (UniqueName: \"kubernetes.io/projected/cd0aa104-48a4-4eab-afcc-2ef03d860551-kube-api-access-pkf5n\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:29 crc kubenswrapper[4675]: I0124 07:15:29.849964 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0aa104-48a4-4eab-afcc-2ef03d860551-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.086133 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dxv2k" event={"ID":"cd0aa104-48a4-4eab-afcc-2ef03d860551","Type":"ContainerDied","Data":"cb36a9d1535f5681537b95b7ba3c55273a5ae88f25755d1811dccbc09984b358"} Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.086174 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb36a9d1535f5681537b95b7ba3c55273a5ae88f25755d1811dccbc09984b358" Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.086236 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dxv2k" Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.092239 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"e19f9d1428d421170057abb69b4e4f629df4265a476a0339f809f9fcae412d46"} Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.092276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"430402ce13c206a02b019a617bebe3e98cc7555beb74bf43df44b234e24f8704"} Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.312506 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.313330 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api" containerID="cri-o://c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" gracePeriod=30 Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.313655 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log" containerID="cri-o://f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" gracePeriod=30 Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.331230 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.331443 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler" containerID="cri-o://b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188" 
gracePeriod=30 Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.365402 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.365605 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" containerID="cri-o://f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211" gracePeriod=30 Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.365757 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" containerID="cri-o://0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204" gracePeriod=30 Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.854520 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.969947 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970023 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970085 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970115 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970135 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.970215 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") pod \"ce57208f-54cc-491b-b898-ba4fddd26d3c\" (UID: \"ce57208f-54cc-491b-b898-ba4fddd26d3c\") " Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.971772 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs" (OuterVolumeSpecName: "logs") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:15:30 crc kubenswrapper[4675]: I0124 07:15:30.976748 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j" (OuterVolumeSpecName: "kube-api-access-wpp7j") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). 
InnerVolumeSpecName "kube-api-access-wpp7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.000012 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.011096 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data" (OuterVolumeSpecName: "config-data") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.022823 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.043844 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ce57208f-54cc-491b-b898-ba4fddd26d3c" (UID: "ce57208f-54cc-491b-b898-ba4fddd26d3c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.071975 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpp7j\" (UniqueName: \"kubernetes.io/projected/ce57208f-54cc-491b-b898-ba4fddd26d3c-kube-api-access-wpp7j\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072028 4675 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072039 4675 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072048 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072058 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce57208f-54cc-491b-b898-ba4fddd26d3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.072066 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce57208f-54cc-491b-b898-ba4fddd26d3c-logs\") on node \"crc\" DevicePath \"\"" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.103882 4675 generic.go:334] "Generic (PLEG): container finished" podID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" exitCode=0 Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105099 4675 generic.go:334] 
"Generic (PLEG): container finished" podID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" exitCode=143 Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.103939 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerDied","Data":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"} Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.103923 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105342 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerDied","Data":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"} Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105370 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce57208f-54cc-491b-b898-ba4fddd26d3c","Type":"ContainerDied","Data":"15e1570d93e869aa39d375fd2b3930f128f62e919f69a8e3b2a153bcd883ebee"} Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.105386 4675 scope.go:117] "RemoveContainer" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.107665 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc3534ca-1196-47a7-889c-cead596f7636" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211" exitCode=143 Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.107709 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerDied","Data":"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"} Jan 24 07:15:31 crc 
kubenswrapper[4675]: I0124 07:15:31.132585 4675 scope.go:117] "RemoveContainer" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.142702 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.154094 4675 scope.go:117] "RemoveContainer" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.155783 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": container with ID starting with c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722 not found: ID does not exist" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.155831 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"} err="failed to get container status \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": rpc error: code = NotFound desc = could not find container \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": container with ID starting with c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722 not found: ID does not exist" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.155877 4675 scope.go:117] "RemoveContainer" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.156181 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": container with ID 
starting with f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc not found: ID does not exist" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.156252 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"} err="failed to get container status \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": rpc error: code = NotFound desc = could not find container \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": container with ID starting with f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc not found: ID does not exist" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.156272 4675 scope.go:117] "RemoveContainer" containerID="c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159183 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722"} err="failed to get container status \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": rpc error: code = NotFound desc = could not find container \"c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722\": container with ID starting with c55df66400b4920f7d32bd01f94cc290ac8cae3a14deb4cf5ac57d1f00827722 not found: ID does not exist" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159262 4675 scope.go:117] "RemoveContainer" containerID="f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159500 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.159561 4675 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc"} err="failed to get container status \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": rpc error: code = NotFound desc = could not find container \"f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc\": container with ID starting with f7e9eab94b88633b1b7e5eb6f9571745a25c9935b12198e16194de4e6056c6fc not found: ID does not exist" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.184547 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.185027 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerName="nova-manage" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185042 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerName="nova-manage" Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.185056 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185063 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log" Jan 24 07:15:31 crc kubenswrapper[4675]: E0124 07:15:31.185083 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185089 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api" Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185279 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-log" Jan 24 
07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185301 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" containerName="nova-manage"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.185313 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" containerName="nova-api-api"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.186278 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.189869 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.191524 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.191734 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.217070 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.275816 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.275892 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a0d5c5-541f-4a43-9d20-22264dca21d1-logs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276023 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg7d8\" (UniqueName: \"kubernetes.io/projected/95a0d5c5-541f-4a43-9d20-22264dca21d1-kube-api-access-dg7d8\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276100 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276327 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.276467 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-config-data\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.379473 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.380532 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-config-data\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381015 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381189 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a0d5c5-541f-4a43-9d20-22264dca21d1-logs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg7d8\" (UniqueName: \"kubernetes.io/projected/95a0d5c5-541f-4a43-9d20-22264dca21d1-kube-api-access-dg7d8\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381348 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.381661 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a0d5c5-541f-4a43-9d20-22264dca21d1-logs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.383905 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-config-data\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.391162 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.393159 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.401364 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg7d8\" (UniqueName: \"kubernetes.io/projected/95a0d5c5-541f-4a43-9d20-22264dca21d1-kube-api-access-dg7d8\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.403435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a0d5c5-541f-4a43-9d20-22264dca21d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"95a0d5c5-541f-4a43-9d20-22264dca21d1\") " pod="openstack/nova-api-0"
Jan 24 07:15:31 crc kubenswrapper[4675]: I0124 07:15:31.587817 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.093296 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 24 07:15:32 crc kubenswrapper[4675]: W0124 07:15:32.101161 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95a0d5c5_541f_4a43_9d20_22264dca21d1.slice/crio-dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922 WatchSource:0}: Error finding container dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922: Status 404 returned error can't find the container with id dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.121771 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed571c62-3ced-4952-a932-37a5a84da52f","Type":"ContainerStarted","Data":"3fe57003c181a071a09a4bb572dd905afb384399d75bdada8cd28f68e2743a29"}
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.122777 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.123406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95a0d5c5-541f-4a43-9d20-22264dca21d1","Type":"ContainerStarted","Data":"dd306f159bfdd8ea1382e542f82e5a4fc20851254c43893bfb99701398e66922"}
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.139569 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7646933809999998 podStartE2EDuration="5.13955107s" podCreationTimestamp="2026-01-24 07:15:27 +0000 UTC" firstStartedPulling="2026-01-24 07:15:27.953340096 +0000 UTC m=+1329.249445319" lastFinishedPulling="2026-01-24 07:15:31.328197785 +0000 UTC m=+1332.624303008" observedRunningTime="2026-01-24 07:15:32.137229574 +0000 UTC m=+1333.433334797" watchObservedRunningTime="2026-01-24 07:15:32.13955107 +0000 UTC m=+1333.435656293"
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.631634 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.815238 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") pod \"39c6c830-f77b-47f7-a874-02324d6c8c39\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") "
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.815355 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") pod \"39c6c830-f77b-47f7-a874-02324d6c8c39\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") "
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.815571 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") pod \"39c6c830-f77b-47f7-a874-02324d6c8c39\" (UID: \"39c6c830-f77b-47f7-a874-02324d6c8c39\") "
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.820965 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn" (OuterVolumeSpecName: "kube-api-access-hb9jn") pod "39c6c830-f77b-47f7-a874-02324d6c8c39" (UID: "39c6c830-f77b-47f7-a874-02324d6c8c39"). InnerVolumeSpecName "kube-api-access-hb9jn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.852489 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39c6c830-f77b-47f7-a874-02324d6c8c39" (UID: "39c6c830-f77b-47f7-a874-02324d6c8c39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.853129 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data" (OuterVolumeSpecName: "config-data") pod "39c6c830-f77b-47f7-a874-02324d6c8c39" (UID: "39c6c830-f77b-47f7-a874-02324d6c8c39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.917558 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.917593 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb9jn\" (UniqueName: \"kubernetes.io/projected/39c6c830-f77b-47f7-a874-02324d6c8c39-kube-api-access-hb9jn\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.917603 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c6c830-f77b-47f7-a874-02324d6c8c39-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:32 crc kubenswrapper[4675]: I0124 07:15:32.953283 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce57208f-54cc-491b-b898-ba4fddd26d3c" path="/var/lib/kubelet/pods/ce57208f-54cc-491b-b898-ba4fddd26d3c/volumes"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.139577 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95a0d5c5-541f-4a43-9d20-22264dca21d1","Type":"ContainerStarted","Data":"646875f2527b96c84e70c52ec9ae92ee249a13a5911412bb88ba4b7d05634e11"}
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.139621 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"95a0d5c5-541f-4a43-9d20-22264dca21d1","Type":"ContainerStarted","Data":"46150f884abff5f9f82b371b50a0ee90cfd445db579317c3a86904ce248eb317"}
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.145443 4675 generic.go:334] "Generic (PLEG): container finished" podID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188" exitCode=0
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.146687 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.147254 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerDied","Data":"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"}
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.147292 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39c6c830-f77b-47f7-a874-02324d6c8c39","Type":"ContainerDied","Data":"e5cc1510bbf0d31557f24e623b20f8c07ccabe6a37acaaa3850bbc9a59202c9b"}
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.147311 4675 scope.go:117] "RemoveContainer" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.170576 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.170556639 podStartE2EDuration="2.170556639s" podCreationTimestamp="2026-01-24 07:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:33.163502238 +0000 UTC m=+1334.459607461" watchObservedRunningTime="2026-01-24 07:15:33.170556639 +0000 UTC m=+1334.466661862"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.193502 4675 scope.go:117] "RemoveContainer" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.193681 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:15:33 crc kubenswrapper[4675]: E0124 07:15:33.194473 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188\": container with ID starting with b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188 not found: ID does not exist" containerID="b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.194511 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188"} err="failed to get container status \"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188\": rpc error: code = NotFound desc = could not find container \"b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188\": container with ID starting with b82fca1574d61816baa952d4bfa01f44e8f4cd933e45de929da90cb20b78d188 not found: ID does not exist"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.218442 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.247403 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:15:33 crc kubenswrapper[4675]: E0124 07:15:33.251042 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.251068 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.251782 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" containerName="nova-scheduler-scheduler"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.255379 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.258322 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.307057 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.346201 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.346318 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-config-data\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.346346 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hssd\" (UniqueName: \"kubernetes.io/projected/361b5d16-2808-40ad-88a0-f07fd4c33e3e-kube-api-access-2hssd\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.448332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.448442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-config-data\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.448466 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hssd\" (UniqueName: \"kubernetes.io/projected/361b5d16-2808-40ad-88a0-f07fd4c33e3e-kube-api-access-2hssd\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.454459 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.454481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b5d16-2808-40ad-88a0-f07fd4c33e3e-config-data\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.466325 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hssd\" (UniqueName: \"kubernetes.io/projected/361b5d16-2808-40ad-88a0-f07fd4c33e3e-kube-api-access-2hssd\") pod \"nova-scheduler-0\" (UID: \"361b5d16-2808-40ad-88a0-f07fd4c33e3e\") " pod="openstack/nova-scheduler-0"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.521288 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:58304->10.217.0.197:8775: read: connection reset by peer"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.521334 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:58312->10.217.0.197:8775: read: connection reset by peer"
Jan 24 07:15:33 crc kubenswrapper[4675]: I0124 07:15:33.602408 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.003556 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066467 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") "
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066549 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") "
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066647 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") "
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066708 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") "
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.066809 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") pod \"bc3534ca-1196-47a7-889c-cead596f7636\" (UID: \"bc3534ca-1196-47a7-889c-cead596f7636\") "
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.071811 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs" (OuterVolumeSpecName: "logs") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.073504 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb" (OuterVolumeSpecName: "kube-api-access-9xhxb") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "kube-api-access-9xhxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.125216 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data" (OuterVolumeSpecName: "config-data") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.141786 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161035 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc3534ca-1196-47a7-889c-cead596f7636" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204" exitCode=0
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161146 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerDied","Data":"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"}
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161173 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.161284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bc3534ca-1196-47a7-889c-cead596f7636","Type":"ContainerDied","Data":"c3d7107344b9836921068d336a5221f9cb286c5dd329b4c60c4a920f0a79e048"}
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.162067 4675 scope.go:117] "RemoveContainer" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168510 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168545 4675 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3534ca-1196-47a7-889c-cead596f7636-logs\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168558 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xhxb\" (UniqueName: \"kubernetes.io/projected/bc3534ca-1196-47a7-889c-cead596f7636-kube-api-access-9xhxb\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.168572 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.169051 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bc3534ca-1196-47a7-889c-cead596f7636" (UID: "bc3534ca-1196-47a7-889c-cead596f7636"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.199260 4675 scope.go:117] "RemoveContainer" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"
Jan 24 07:15:34 crc kubenswrapper[4675]: W0124 07:15:34.209555 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod361b5d16_2808_40ad_88a0_f07fd4c33e3e.slice/crio-56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d WatchSource:0}: Error finding container 56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d: Status 404 returned error can't find the container with id 56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.209957 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.270179 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3534ca-1196-47a7-889c-cead596f7636-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.360660 4675 scope.go:117] "RemoveContainer" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"
Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.361096 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204\": container with ID starting with 0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204 not found: ID does not exist" containerID="0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.361130 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204"} err="failed to get container status \"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204\": rpc error: code = NotFound desc = could not find container \"0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204\": container with ID starting with 0ca10fa8048479a30ea8b232307e95f0d7f171dd9061f7b8fb126eaaa29c9204 not found: ID does not exist"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.361157 4675 scope.go:117] "RemoveContainer" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"
Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.361502 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211\": container with ID starting with f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211 not found: ID does not exist" containerID="f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.361528 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211"} err="failed to get container status \"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211\": rpc error: code = NotFound desc = could not find container \"f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211\": container with ID starting with f1d21e296b8f380cbb1b2daafa21b1f675235483c3032c9ebce2601426d45211 not found: ID does not exist"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.494589 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.508431 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516387 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.516774 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516789 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log"
Jan 24 07:15:34 crc kubenswrapper[4675]: E0124 07:15:34.516802 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516809 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.516980 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-log"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.517005 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3534ca-1196-47a7-889c-cead596f7636" containerName="nova-metadata-metadata"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.517872 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.519746 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.522781 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.534893 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575737 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-config-data\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575795 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575845 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgb8k\" (UniqueName: \"kubernetes.io/projected/d55e1385-c016-4bb9-afc2-a070f5a88241-kube-api-access-vgb8k\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.575876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d55e1385-c016-4bb9-afc2-a070f5a88241-logs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678053 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678299 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-config-data\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678443 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgb8k\" (UniqueName: \"kubernetes.io/projected/d55e1385-c016-4bb9-afc2-a070f5a88241-kube-api-access-vgb8k\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.678669 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d55e1385-c016-4bb9-afc2-a070f5a88241-logs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.679147 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d55e1385-c016-4bb9-afc2-a070f5a88241-logs\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.682919 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.685238 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-config-data\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0"
Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.688489 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e1385-c016-4bb9-afc2-a070f5a88241-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID:
\"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.695279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgb8k\" (UniqueName: \"kubernetes.io/projected/d55e1385-c016-4bb9-afc2-a070f5a88241-kube-api-access-vgb8k\") pod \"nova-metadata-0\" (UID: \"d55e1385-c016-4bb9-afc2-a070f5a88241\") " pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.833960 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.963702 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c6c830-f77b-47f7-a874-02324d6c8c39" path="/var/lib/kubelet/pods/39c6c830-f77b-47f7-a874-02324d6c8c39/volumes" Jan 24 07:15:34 crc kubenswrapper[4675]: I0124 07:15:34.965792 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3534ca-1196-47a7-889c-cead596f7636" path="/var/lib/kubelet/pods/bc3534ca-1196-47a7-889c-cead596f7636/volumes" Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.187153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"361b5d16-2808-40ad-88a0-f07fd4c33e3e","Type":"ContainerStarted","Data":"bd3399e6747c252d7babb83956312d2708d980c161f2022e34c6f7fe7deb36da"} Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.187195 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"361b5d16-2808-40ad-88a0-f07fd4c33e3e","Type":"ContainerStarted","Data":"56ffb3002ec415f142d1f9b7d97ac57a7f8d93662e701539a42de63222606d9d"} Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.227151 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.227130185 podStartE2EDuration="2.227130185s" podCreationTimestamp="2026-01-24 07:15:33 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:35.215217937 +0000 UTC m=+1336.511323160" watchObservedRunningTime="2026-01-24 07:15:35.227130185 +0000 UTC m=+1336.523235418" Jan 24 07:15:35 crc kubenswrapper[4675]: I0124 07:15:35.295841 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 24 07:15:35 crc kubenswrapper[4675]: W0124 07:15:35.295967 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd55e1385_c016_4bb9_afc2_a070f5a88241.slice/crio-680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3 WatchSource:0}: Error finding container 680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3: Status 404 returned error can't find the container with id 680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3 Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.205138 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d55e1385-c016-4bb9-afc2-a070f5a88241","Type":"ContainerStarted","Data":"0800c0553de08ed7748a32fa828ba7b206d1eadc16175e7e9248f918dd2247b1"} Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.205482 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d55e1385-c016-4bb9-afc2-a070f5a88241","Type":"ContainerStarted","Data":"bc941574cbce074f050c4711401dadee97817cd498ae5f78af3f88de274a2d67"} Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.205497 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d55e1385-c016-4bb9-afc2-a070f5a88241","Type":"ContainerStarted","Data":"680e1a12e6f26b94832804282a2c76f2160ef5e024d392248cad3ba280beded3"} Jan 24 07:15:36 crc kubenswrapper[4675]: I0124 07:15:36.232055 4675 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.232034042 podStartE2EDuration="2.232034042s" podCreationTimestamp="2026-01-24 07:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:15:36.228590859 +0000 UTC m=+1337.524696112" watchObservedRunningTime="2026-01-24 07:15:36.232034042 +0000 UTC m=+1337.528139265" Jan 24 07:15:38 crc kubenswrapper[4675]: I0124 07:15:38.603553 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 24 07:15:38 crc kubenswrapper[4675]: I0124 07:15:38.629775 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:15:38 crc kubenswrapper[4675]: I0124 07:15:38.629854 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:15:39 crc kubenswrapper[4675]: I0124 07:15:39.835025 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:15:39 crc kubenswrapper[4675]: I0124 07:15:39.836162 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 24 07:15:41 crc kubenswrapper[4675]: I0124 07:15:41.588890 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 24 07:15:41 crc kubenswrapper[4675]: I0124 07:15:41.589655 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Jan 24 07:15:42 crc kubenswrapper[4675]: I0124 07:15:42.601867 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="95a0d5c5-541f-4a43-9d20-22264dca21d1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:42 crc kubenswrapper[4675]: I0124 07:15:42.601923 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="95a0d5c5-541f-4a43-9d20-22264dca21d1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:43 crc kubenswrapper[4675]: I0124 07:15:43.603564 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 24 07:15:43 crc kubenswrapper[4675]: I0124 07:15:43.642292 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 24 07:15:44 crc kubenswrapper[4675]: I0124 07:15:44.328657 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 24 07:15:44 crc kubenswrapper[4675]: I0124 07:15:44.834769 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 07:15:44 crc kubenswrapper[4675]: I0124 07:15:44.834811 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 24 07:15:45 crc kubenswrapper[4675]: I0124 07:15:45.847957 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d55e1385-c016-4bb9-afc2-a070f5a88241" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:45 crc 
kubenswrapper[4675]: I0124 07:15:45.847965 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d55e1385-c016-4bb9-afc2-a070f5a88241" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.595376 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.596336 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.596487 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:51 crc kubenswrapper[4675]: I0124 07:15:51.600900 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 07:15:52 crc kubenswrapper[4675]: I0124 07:15:52.367244 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 24 07:15:52 crc kubenswrapper[4675]: I0124 07:15:52.375692 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 24 07:15:54 crc kubenswrapper[4675]: I0124 07:15:54.841757 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:54 crc kubenswrapper[4675]: I0124 07:15:54.844596 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 24 07:15:54 crc kubenswrapper[4675]: I0124 07:15:54.849986 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 24 07:15:55 crc kubenswrapper[4675]: I0124 07:15:55.404680 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Jan 24 07:15:57 crc kubenswrapper[4675]: I0124 07:15:57.495120 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 24 07:16:07 crc kubenswrapper[4675]: I0124 07:16:07.888764 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.629845 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.630215 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.630264 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.631682 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:16:08 crc kubenswrapper[4675]: I0124 07:16:08.631771 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" 
containerName="machine-config-daemon" containerID="cri-o://c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0" gracePeriod=600 Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.521638 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0" exitCode=0 Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.521970 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0"} Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.522117 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"} Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.522139 4675 scope.go:117] "RemoveContainer" containerID="ccc264da54b5f1cadbac5cdeddfb0468de5e9dc08fb8953998ed833d79a9f49c" Jan 24 07:16:09 crc kubenswrapper[4675]: I0124 07:16:09.599524 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 24 07:16:12 crc kubenswrapper[4675]: I0124 07:16:12.592508 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" containerID="cri-o://0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" gracePeriod=604796 Jan 24 07:16:14 crc kubenswrapper[4675]: I0124 07:16:14.207540 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" 
containerName="rabbitmq" containerID="cri-o://8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668" gracePeriod=604796 Jan 24 07:16:15 crc kubenswrapper[4675]: I0124 07:16:15.132106 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 24 07:16:15 crc kubenswrapper[4675]: I0124 07:16:15.531757 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 24 07:16:19 crc kubenswrapper[4675]: E0124 07:16:19.266534 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.376548 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.420531 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.420897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421169 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421278 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421385 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421488 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421597 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421686 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421829 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.421963 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") pod \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\" (UID: \"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c\") " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 
07:16:19.422556 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.423019 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.424040 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.437592 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z" (OuterVolumeSpecName: "kube-api-access-2qg6z") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "kube-api-access-2qg6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.438616 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.439654 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info" (OuterVolumeSpecName: "pod-info") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.446527 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.515587 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545624 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545664 4675 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545686 4675 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545701 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qg6z\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-kube-api-access-2qg6z\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545738 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.545780 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.551845 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: 
"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.648249 4675 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.655614 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data" (OuterVolumeSpecName: "config-data") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.656524 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730278 4675 generic.go:334] "Generic (PLEG): container finished" podID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" exitCode=0 Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730340 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerDied","Data":"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075"} Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730384 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c","Type":"ContainerDied","Data":"dc6f572e1e59798884630905a0aa55c9e501f7fef5df41864f737d4d70bd2321"} Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730402 
4675 scope.go:117] "RemoveContainer" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.730563 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.738581 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf" (OuterVolumeSpecName: "server-conf") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.750468 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.750493 4675 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.750503 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.754953 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" (UID: "ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.849694 4675 scope.go:117] "RemoveContainer" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.851566 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.887177 4675 scope.go:117] "RemoveContainer" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" Jan 24 07:16:19 crc kubenswrapper[4675]: E0124 07:16:19.888966 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075\": container with ID starting with 0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075 not found: ID does not exist" containerID="0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.889002 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075"} err="failed to get container status \"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075\": rpc error: code = NotFound desc = could not find container \"0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075\": container with ID starting with 0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075 not found: ID does not exist" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.889023 4675 scope.go:117] "RemoveContainer" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" Jan 24 07:16:19 crc kubenswrapper[4675]: E0124 07:16:19.889403 4675 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a\": container with ID starting with 3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a not found: ID does not exist" containerID="3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a" Jan 24 07:16:19 crc kubenswrapper[4675]: I0124 07:16:19.889426 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a"} err="failed to get container status \"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a\": rpc error: code = NotFound desc = could not find container \"3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a\": container with ID starting with 3fe13a35d7f45b326efd4ce29a38684165caf8a07d36b42a43a0a4f5a145955a not found: ID does not exist" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.105231 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.116454 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.152667 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: E0124 07:16:20.153054 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="setup-container" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.153070 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="setup-container" Jan 24 07:16:20 crc kubenswrapper[4675]: E0124 07:16:20.153100 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.153110 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.153291 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" containerName="rabbitmq" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.154261 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.156082 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.156602 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.156777 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.157787 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.157903 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.158325 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nnfwj" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.160352 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.189338 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 24 07:16:20 crc kubenswrapper[4675]: 
I0124 07:16:20.262918 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263001 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263024 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263047 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263140 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263292 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxb7s\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-kube-api-access-wxb7s\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263453 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3fd85775-321f-4647-95b6-773ec82811e0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263525 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3fd85775-321f-4647-95b6-773ec82811e0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263583 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.263697 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-config-data\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.365037 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366003 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxb7s\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-kube-api-access-wxb7s\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.365439 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366045 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3fd85775-321f-4647-95b6-773ec82811e0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366076 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3fd85775-321f-4647-95b6-773ec82811e0-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366101 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366137 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-config-data\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366168 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366231 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366254 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366272 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.366649 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.367168 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-config-data\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.367206 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.368038 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.368216 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3fd85775-321f-4647-95b6-773ec82811e0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.371648 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.372563 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3fd85775-321f-4647-95b6-773ec82811e0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.373815 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3fd85775-321f-4647-95b6-773ec82811e0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.375436 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.386206 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wxb7s\" (UniqueName: \"kubernetes.io/projected/3fd85775-321f-4647-95b6-773ec82811e0-kube-api-access-wxb7s\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.419413 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"3fd85775-321f-4647-95b6-773ec82811e0\") " pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.473447 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.768825 4675 generic.go:334] "Generic (PLEG): container finished" podID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerID="8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668" exitCode=0 Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.769189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerDied","Data":"8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668"} Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.803256 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.874560 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875001 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875030 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875155 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875198 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chsxm\" (UniqueName: 
\"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875330 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875393 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875459 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875511 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.875553 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") pod \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\" (UID: \"50ed4c9b-a365-46aa-95d7-7be5d2cc354a\") " Jan 24 07:16:20 crc 
kubenswrapper[4675]: I0124 07:16:20.877918 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.878199 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.878544 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.884768 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info" (OuterVolumeSpecName: "pod-info") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.884873 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm" (OuterVolumeSpecName: "kube-api-access-chsxm") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "kube-api-access-chsxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.887272 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.889135 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.894358 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.935832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data" (OuterVolumeSpecName: "config-data") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.948018 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf" (OuterVolumeSpecName: "server-conf") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.963044 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c" path="/var/lib/kubelet/pods/ddb8c6e7-7008-4ef9-aa6a-e6c7db1b1d7c/volumes" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977550 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chsxm\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-kube-api-access-chsxm\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977576 4675 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977584 4675 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 24 
07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977593 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977601 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977609 4675 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977617 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-config-data\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977625 4675 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-pod-info\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977653 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.977662 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:20 crc kubenswrapper[4675]: I0124 07:16:20.995003 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.046736 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "50ed4c9b-a365-46aa-95d7-7be5d2cc354a" (UID: "50ed4c9b-a365-46aa-95d7-7be5d2cc354a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.082392 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.082421 4675 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ed4c9b-a365-46aa-95d7-7be5d2cc354a-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.089038 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.781409 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ed4c9b-a365-46aa-95d7-7be5d2cc354a","Type":"ContainerDied","Data":"e02cfc39376a20ed79af6aa4a70a95d12cb107645ef263fc4bfe2732893da583"}
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.781743 4675 scope.go:117] "RemoveContainer" containerID="8b9f86fab7a581af646c89e80c3c7ca0ce4c63bf71b2b12b42c289f8f5551668"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.781895 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.784754 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerStarted","Data":"1c0f864913c69fa0df26d8f0e2555bf1eaee8f215d71bad716dd5266c7480278"}
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.819334 4675 scope.go:117] "RemoveContainer" containerID="78ce6643db3a1b1549c4015afb11eee3ac5a9eb412378d961f3105790aac9761"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.827457 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.839804 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.866618 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 24 07:16:21 crc kubenswrapper[4675]: E0124 07:16:21.867571 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="setup-container"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.867665 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="setup-container"
Jan 24 07:16:21 crc kubenswrapper[4675]: E0124 07:16:21.867758 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.867824 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.868138 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" containerName="rabbitmq"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.885928 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898323 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898409 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898544 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.898778 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.900379 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.900628 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bt874"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.901475 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 24 07:16:21 crc kubenswrapper[4675]: I0124 07:16:21.901670 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006493 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c146e5e-4709-4401-a5eb-522609573260-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006586 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006620 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006643 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdtg\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-kube-api-access-xqdtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006682 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c146e5e-4709-4401-a5eb-522609573260-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006754 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006921 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.006961 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.007003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.007025 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108172 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108292 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108316 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108345 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108362 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108410 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108424 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c146e5e-4709-4401-a5eb-522609573260-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108446 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108463 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108479 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdtg\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-kube-api-access-xqdtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108499 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c146e5e-4709-4401-a5eb-522609573260-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.108703 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.109527 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.109870 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.110078 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.110290 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.111126 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c146e5e-4709-4401-a5eb-522609573260-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.128756 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.155486 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.185179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.185219 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c146e5e-4709-4401-a5eb-522609573260-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.186199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c146e5e-4709-4401-a5eb-522609573260-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.186666 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdtg\" (UniqueName: \"kubernetes.io/projected/7c146e5e-4709-4401-a5eb-522609573260-kube-api-access-xqdtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"7c146e5e-4709-4401-a5eb-522609573260\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.250915 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.733862 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.735630 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.748998 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.783960 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.818948 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.818992 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819041 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819078 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819118 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819136 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.819161 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.843949 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerStarted","Data":"75f405fdae86dd78c23a70324d2fe9b92658e5ef111d4ed788628deca09cdb34"}
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.920516 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921773 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921885 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.921963 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.922047 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.922214 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.922307 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.923128 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.924262 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.925844 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.926443 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.927114 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.927601 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.965076 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ed4c9b-a365-46aa-95d7-7be5d2cc354a" path="/var/lib/kubelet/pods/50ed4c9b-a365-46aa-95d7-7be5d2cc354a/volumes"
Jan 24 07:16:22 crc kubenswrapper[4675]: I0124 07:16:22.969497 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"dnsmasq-dns-d558885bc-9dtm6\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:23 crc kubenswrapper[4675]: I0124 07:16:23.072029 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:23.531635 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:24 crc kubenswrapper[4675]: W0124 07:16:23.537504 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ac6311b_068f_4d1a_9950_f6ad4143ec44.slice/crio-6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6 WatchSource:0}: Error finding container 6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6: Status 404 returned error can't find the container with id 6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6
Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:23.853109 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerStarted","Data":"6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6"}
Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:23.854044 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerStarted","Data":"297d18d0a45c3f1d41808bad90659c6819f8936b9b6cb24a60dda7a0a3cc9c86"}
Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:24.889124 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerStarted","Data":"4c881fc8ce20aba2f577450cba95424c89b08fbe29954e129e4e033f86adfdfd"}
Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:24.894462 4675 generic.go:334] "Generic (PLEG): container finished" podID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerID="35d6db473d64d478e0b32272afc04578101f4641988ab25761ac4fafc2485424" exitCode=0
Jan 24 07:16:24 crc kubenswrapper[4675]: I0124 07:16:24.896570 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerDied","Data":"35d6db473d64d478e0b32272afc04578101f4641988ab25761ac4fafc2485424"}
Jan 24 07:16:25 crc kubenswrapper[4675]: I0124 07:16:25.913026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerStarted","Data":"0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b"}
Jan 24 07:16:25 crc kubenswrapper[4675]: I0124 07:16:25.946338 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" podStartSLOduration=3.946319259 podStartE2EDuration="3.946319259s" podCreationTimestamp="2026-01-24 07:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:25.933642611 +0000 UTC m=+1387.229747874" watchObservedRunningTime="2026-01-24 07:16:25.946319259 +0000 UTC m=+1387.242424492"
Jan 24 07:16:26 crc kubenswrapper[4675]: I0124 07:16:26.922569 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:29 crc kubenswrapper[4675]: E0124 07:16:29.579516 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.074027 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.184427 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"]
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.184911 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns" containerID="cri-o://0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480" gracePeriod=10
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.355091 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-4qjxm"]
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.357252 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375010 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375106 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375159 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqlcv\" (UniqueName: \"kubernetes.io/projected/4a4ca579-5173-42d0-8dd8-d287df832c44-kube-api-access-hqlcv\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375256 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375328 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375347 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.375366 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-config\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.378462 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-4qjxm"]
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.392091 4675 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.203:5353: connect: connection refused"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477191 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477269 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477290 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477309 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-config\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm"
Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477375 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID:
\"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477419 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.477436 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqlcv\" (UniqueName: \"kubernetes.io/projected/4a4ca579-5173-42d0-8dd8-d287df832c44-kube-api-access-hqlcv\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478636 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-config\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478820 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478854 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " 
pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.478640 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.479212 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.485386 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4ca579-5173-42d0-8dd8-d287df832c44-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.504202 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqlcv\" (UniqueName: \"kubernetes.io/projected/4a4ca579-5173-42d0-8dd8-d287df832c44-kube-api-access-hqlcv\") pod \"dnsmasq-dns-6b6dc74c5-4qjxm\" (UID: \"4a4ca579-5173-42d0-8dd8-d287df832c44\") " pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:33 crc kubenswrapper[4675]: I0124 07:16:33.682285 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.017586 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerID="0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480" exitCode=0 Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.018006 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerDied","Data":"0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480"} Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.144479 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-4qjxm"] Jan 24 07:16:34 crc kubenswrapper[4675]: W0124 07:16:34.145371 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a4ca579_5173_42d0_8dd8_d287df832c44.slice/crio-a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4 WatchSource:0}: Error finding container a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4: Status 404 returned error can't find the container with id a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4 Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.183101 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.189896 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190009 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190090 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190172 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190218 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.190246 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") pod \"bc9f2853-f671-4647-81df-50314ca5e8a1\" (UID: \"bc9f2853-f671-4647-81df-50314ca5e8a1\") " Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.196883 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t" (OuterVolumeSpecName: "kube-api-access-f5l2t") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "kube-api-access-f5l2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.298629 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5l2t\" (UniqueName: \"kubernetes.io/projected/bc9f2853-f671-4647-81df-50314ca5e8a1-kube-api-access-f5l2t\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.330474 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.334139 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config" (OuterVolumeSpecName: "config") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.345856 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.354117 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.375015 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc9f2853-f671-4647-81df-50314ca5e8a1" (UID: "bc9f2853-f671-4647-81df-50314ca5e8a1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401114 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401167 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401182 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401196 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:34 crc kubenswrapper[4675]: I0124 07:16:34.401207 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9f2853-f671-4647-81df-50314ca5e8a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.029389 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" event={"ID":"bc9f2853-f671-4647-81df-50314ca5e8a1","Type":"ContainerDied","Data":"2c3c2a43e5e1f891dc078496766b3dbc527e0916a446f16d71f3f1e737ccce2c"} Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.029446 4675 scope.go:117] "RemoveContainer" containerID="0ab4fa2df75231345106926fff79be99c6a1cf266a2f4e1ca9da801dcc25d480" Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.029592 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-2vwtf" Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.034009 4675 generic.go:334] "Generic (PLEG): container finished" podID="4a4ca579-5173-42d0-8dd8-d287df832c44" containerID="86498a3b6443293315f6b8f373687f3d56f3fc2befa733b480af124f6d0671bb" exitCode=0 Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.034051 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" event={"ID":"4a4ca579-5173-42d0-8dd8-d287df832c44","Type":"ContainerDied","Data":"86498a3b6443293315f6b8f373687f3d56f3fc2befa733b480af124f6d0671bb"} Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.034074 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" event={"ID":"4a4ca579-5173-42d0-8dd8-d287df832c44","Type":"ContainerStarted","Data":"a2f8d59d2bd2cea71d16e9c0b3c82e739dcaff8f41b9d02a609075bcebcb7ce4"} Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.062053 4675 scope.go:117] "RemoveContainer" containerID="e61aa274860730298b3d466a37bbb7b9f9970e99b80ea9db136fc24849710d8e" Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.062447 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"] Jan 24 07:16:35 crc kubenswrapper[4675]: I0124 07:16:35.073710 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-2vwtf"] Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 07:16:36.046016 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" event={"ID":"4a4ca579-5173-42d0-8dd8-d287df832c44","Type":"ContainerStarted","Data":"12f3d450ef2852cdfe84daedbfafdbc1c3a0046155981e95040be8a749a24c4f"} Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 07:16:36.046674 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 
07:16:36.068183 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" podStartSLOduration=3.068162574 podStartE2EDuration="3.068162574s" podCreationTimestamp="2026-01-24 07:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:36.065790637 +0000 UTC m=+1397.361895860" watchObservedRunningTime="2026-01-24 07:16:36.068162574 +0000 UTC m=+1397.364267807" Jan 24 07:16:36 crc kubenswrapper[4675]: I0124 07:16:36.951393 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" path="/var/lib/kubelet/pods/bc9f2853-f671-4647-81df-50314ca5e8a1/volumes" Jan 24 07:16:39 crc kubenswrapper[4675]: E0124 07:16:39.895004 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]" Jan 24 07:16:43 crc kubenswrapper[4675]: I0124 07:16:43.683966 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-4qjxm" Jan 24 07:16:43 crc kubenswrapper[4675]: I0124 07:16:43.746483 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"] Jan 24 07:16:43 crc kubenswrapper[4675]: I0124 07:16:43.746806 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns" containerID="cri-o://0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b" gracePeriod=10 Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.134594 4675 generic.go:334] "Generic (PLEG): 
container finished" podID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerID="0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b" exitCode=0 Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.134688 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerDied","Data":"0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b"} Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.251277 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.393741 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.393922 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394234 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") pod 
\"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394384 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394451 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.394537 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") pod \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\" (UID: \"0ac6311b-068f-4d1a-9950-f6ad4143ec44\") " Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.421012 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb" (OuterVolumeSpecName: "kube-api-access-9ztwb") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "kube-api-access-9ztwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.452862 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.469422 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.469751 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config" (OuterVolumeSpecName: "config") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.470642 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497249 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztwb\" (UniqueName: \"kubernetes.io/projected/0ac6311b-068f-4d1a-9950-f6ad4143ec44-kube-api-access-9ztwb\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497307 4675 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497317 4675 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497325 4675 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.497335 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.501316 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.511248 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ac6311b-068f-4d1a-9950-f6ad4143ec44" (UID: "0ac6311b-068f-4d1a-9950-f6ad4143ec44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.599263 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:44 crc kubenswrapper[4675]: I0124 07:16:44.599692 4675 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ac6311b-068f-4d1a-9950-f6ad4143ec44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.145376 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-9dtm6" event={"ID":"0ac6311b-068f-4d1a-9950-f6ad4143ec44","Type":"ContainerDied","Data":"6cc48653ebd609e683557268147c418ae47794de31786e47b744c830eff868d6"} Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.145438 4675 scope.go:117] "RemoveContainer" containerID="0e1889fd923193c5a66c5d433254b77fcdca08e4541adf5af5835fe87dde6d2b" Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.145605 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-9dtm6"
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.173737 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.176580 4675 scope.go:117] "RemoveContainer" containerID="35d6db473d64d478e0b32272afc04578101f4641988ab25761ac4fafc2485424"
Jan 24 07:16:45 crc kubenswrapper[4675]: I0124 07:16:45.191671 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-9dtm6"]
Jan 24 07:16:46 crc kubenswrapper[4675]: I0124 07:16:46.954313 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" path="/var/lib/kubelet/pods/0ac6311b-068f-4d1a-9950-f6ad4143ec44/volumes"
Jan 24 07:16:50 crc kubenswrapper[4675]: E0124 07:16:50.137368 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]"
Jan 24 07:16:55 crc kubenswrapper[4675]: I0124 07:16:55.227136 4675 generic.go:334] "Generic (PLEG): container finished" podID="3fd85775-321f-4647-95b6-773ec82811e0" containerID="75f405fdae86dd78c23a70324d2fe9b92658e5ef111d4ed788628deca09cdb34" exitCode=0
Jan 24 07:16:55 crc kubenswrapper[4675]: I0124 07:16:55.227700 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerDied","Data":"75f405fdae86dd78c23a70324d2fe9b92658e5ef111d4ed788628deca09cdb34"}
Jan 24 07:16:56 crc kubenswrapper[4675]: I0124 07:16:56.255358 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3fd85775-321f-4647-95b6-773ec82811e0","Type":"ContainerStarted","Data":"7c8fcdf3763fe30ff360bf453fb6a0ce2f3b917e5ff553be71c10483b879ccbe"}
Jan 24 07:16:56 crc kubenswrapper[4675]: I0124 07:16:56.257460 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 24 07:16:57 crc kubenswrapper[4675]: I0124 07:16:57.265791 4675 generic.go:334] "Generic (PLEG): container finished" podID="7c146e5e-4709-4401-a5eb-522609573260" containerID="4c881fc8ce20aba2f577450cba95424c89b08fbe29954e129e4e033f86adfdfd" exitCode=0
Jan 24 07:16:57 crc kubenswrapper[4675]: I0124 07:16:57.265881 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerDied","Data":"4c881fc8ce20aba2f577450cba95424c89b08fbe29954e129e4e033f86adfdfd"}
Jan 24 07:16:57 crc kubenswrapper[4675]: I0124 07:16:57.302690 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.302671333 podStartE2EDuration="37.302671333s" podCreationTimestamp="2026-01-24 07:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:56.285998199 +0000 UTC m=+1417.582103502" watchObservedRunningTime="2026-01-24 07:16:57.302671333 +0000 UTC m=+1418.598776556"
Jan 24 07:16:58 crc kubenswrapper[4675]: I0124 07:16:58.277373 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7c146e5e-4709-4401-a5eb-522609573260","Type":"ContainerStarted","Data":"19e4eb8abb72ccc8dce2d9807941591415dc8a36308c8e2e50bbe505ff9609f1"}
Jan 24 07:16:58 crc kubenswrapper[4675]: I0124 07:16:58.278042 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:16:58 crc kubenswrapper[4675]: I0124 07:16:58.305347 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.305333637 podStartE2EDuration="37.305333637s" podCreationTimestamp="2026-01-24 07:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:16:58.303970794 +0000 UTC m=+1419.600076007" watchObservedRunningTime="2026-01-24 07:16:58.305333637 +0000 UTC m=+1419.601438850"
Jan 24 07:17:00 crc kubenswrapper[4675]: E0124 07:17:00.358525 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111128 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"]
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111688 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111701 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111738 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111744 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111758 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111764 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="init"
Jan 24 07:17:02 crc kubenswrapper[4675]: E0124 07:17:02.111781 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111786 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111941 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac6311b-068f-4d1a-9950-f6ad4143ec44" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.111961 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9f2853-f671-4647-81df-50314ca5e8a1" containerName="dnsmasq-dns"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.112499 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.115382 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.115957 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.116666 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119003 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119072 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119130 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.119275 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.123187 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.138342 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"]
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221046 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221121 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221158 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.221204 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.226788 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.227136 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.237884 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.241584 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:02 crc kubenswrapper[4675]: I0124 07:17:02.430646 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:03 crc kubenswrapper[4675]: I0124 07:17:03.039981 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"]
Jan 24 07:17:03 crc kubenswrapper[4675]: I0124 07:17:03.335413 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerStarted","Data":"e3202a37ac290012e4933ff508f6e23ba2f74d0bf8dba51ccf4c15a7dc208324"}
Jan 24 07:17:10 crc kubenswrapper[4675]: I0124 07:17:10.477888 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 24 07:17:10 crc kubenswrapper[4675]: E0124 07:17:10.635014 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddb8c6e7_7008_4ef9_aa6a_e6c7db1b1d7c.slice/crio-conmon-0b94540afca390eea5882a63c064f8e35bbe1581b44e19c21137ae27fc177075.scope\": RecentStats: unable to find data in memory cache]"
Jan 24 07:17:12 crc kubenswrapper[4675]: I0124 07:17:12.256919 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 24 07:17:15 crc kubenswrapper[4675]: I0124 07:17:15.446887 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerStarted","Data":"5e2d98b8ecebf9f363f40de1fe27344c08503ba956ccee3d4ce6f6ac032b1338"}
Jan 24 07:17:15 crc kubenswrapper[4675]: I0124 07:17:15.469847 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" podStartSLOduration=2.038826025 podStartE2EDuration="13.469826637s" podCreationTimestamp="2026-01-24 07:17:02 +0000 UTC" firstStartedPulling="2026-01-24 07:17:03.041690905 +0000 UTC m=+1424.337796128" lastFinishedPulling="2026-01-24 07:17:14.472691517 +0000 UTC m=+1435.768796740" observedRunningTime="2026-01-24 07:17:15.465936483 +0000 UTC m=+1436.762041716" watchObservedRunningTime="2026-01-24 07:17:15.469826637 +0000 UTC m=+1436.765931860"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.076284 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"]
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.078501 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.115517 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"]
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.204664 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.204730 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.204827 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306257 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306293 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.306787 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.307019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.331432 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"certified-operators-ffsqt\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") " pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.398939 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.548875 4675 generic.go:334] "Generic (PLEG): container finished" podID="774fb762-6506-4e0c-9732-9208f7802057" containerID="5e2d98b8ecebf9f363f40de1fe27344c08503ba956ccee3d4ce6f6ac032b1338" exitCode=0
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.548918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerDied","Data":"5e2d98b8ecebf9f363f40de1fe27344c08503ba956ccee3d4ce6f6ac032b1338"}
Jan 24 07:17:26 crc kubenswrapper[4675]: I0124 07:17:26.929561 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"]
Jan 24 07:17:27 crc kubenswrapper[4675]: I0124 07:17:27.560309 4675 generic.go:334] "Generic (PLEG): container finished" podID="215a25e3-9603-4b01-b578-c5f6883fd589" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04" exitCode=0
Jan 24 07:17:27 crc kubenswrapper[4675]: I0124 07:17:27.560386 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04"}
Jan 24 07:17:27 crc kubenswrapper[4675]: I0124 07:17:27.560652 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerStarted","Data":"772e3cfb34ff48595caf0e6b29650847a506a8da6d2a6bdb2d081faf4a3cd015"}
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.019092 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.047963 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") "
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.048311 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") "
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.048584 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") "
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.048656 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") pod \"774fb762-6506-4e0c-9732-9208f7802057\" (UID: \"774fb762-6506-4e0c-9732-9208f7802057\") "
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.055884 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455" (OuterVolumeSpecName: "kube-api-access-sq455") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "kube-api-access-sq455". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.058488 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.077838 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.081115 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory" (OuterVolumeSpecName: "inventory") pod "774fb762-6506-4e0c-9732-9208f7802057" (UID: "774fb762-6506-4e0c-9732-9208f7802057"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151594 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151656 4675 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151675 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/774fb762-6506-4e0c-9732-9208f7802057-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.151690 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq455\" (UniqueName: \"kubernetes.io/projected/774fb762-6506-4e0c-9732-9208f7802057-kube-api-access-sq455\") on node \"crc\" DevicePath \"\""
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.578300 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerStarted","Data":"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"}
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.594838 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.596966 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q" event={"ID":"774fb762-6506-4e0c-9732-9208f7802057","Type":"ContainerDied","Data":"e3202a37ac290012e4933ff508f6e23ba2f74d0bf8dba51ccf4c15a7dc208324"}
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.597002 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3202a37ac290012e4933ff508f6e23ba2f74d0bf8dba51ccf4c15a7dc208324"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.744772 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"]
Jan 24 07:17:28 crc kubenswrapper[4675]: E0124 07:17:28.745536 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774fb762-6506-4e0c-9732-9208f7802057" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.745556 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="774fb762-6506-4e0c-9732-9208f7802057" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.745757 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="774fb762-6506-4e0c-9732-9208f7802057" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.746464 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.751567 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.752383 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.752784 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.752942 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.754909 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"]
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.778263 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.778554 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.778801 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.880731 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.881132 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.881243 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.887016 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.887501 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:28 crc kubenswrapper[4675]: I0124 07:17:28.899366 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zd8ln\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:29 crc kubenswrapper[4675]: I0124 07:17:29.074196 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"
Jan 24 07:17:29 crc kubenswrapper[4675]: I0124 07:17:29.636811 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln"]
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.615505 4675 generic.go:334] "Generic (PLEG): container finished" podID="215a25e3-9603-4b01-b578-c5f6883fd589" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698" exitCode=0
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.615593 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"}
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.618776 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerStarted","Data":"453b3ef2649da6a1c50b0dc98d40d38633741b39bc391c1b8a4378bfcbb7db66"}
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.641018 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"]
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.642825 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.685053 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"]
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.716335 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.716689 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.716921 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.818537 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.818614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.818667 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.819144 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.819292 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.840568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"redhat-marketplace-27f4l\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:30 crc kubenswrapper[4675]: I0124 07:17:30.967989 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:31 crc kubenswrapper[4675]: W0124 07:17:31.613742 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf64f926c_c118_46ad_80c6_a51e8e235362.slice/crio-4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6 WatchSource:0}: Error finding container 4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6: Status 404 returned error can't find the container with id 4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6
Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.614220 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"]
Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.629969 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerStarted","Data":"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"}
Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.631767 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerStarted","Data":"9d44c74646a51dd3d2e85a8e39a474b7cca6ee83fc282eea72fa4e3a554243fe"}
Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.633519 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerStarted","Data":"4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6"}
Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.666958 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ffsqt" podStartSLOduration=2.172077779
podStartE2EDuration="5.666935989s" podCreationTimestamp="2026-01-24 07:17:26 +0000 UTC" firstStartedPulling="2026-01-24 07:17:27.563956672 +0000 UTC m=+1448.860061905" lastFinishedPulling="2026-01-24 07:17:31.058814892 +0000 UTC m=+1452.354920115" observedRunningTime="2026-01-24 07:17:31.651784022 +0000 UTC m=+1452.947889255" watchObservedRunningTime="2026-01-24 07:17:31.666935989 +0000 UTC m=+1452.963041212" Jan 24 07:17:31 crc kubenswrapper[4675]: I0124 07:17:31.689492 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" podStartSLOduration=2.482506016 podStartE2EDuration="3.689474895s" podCreationTimestamp="2026-01-24 07:17:28 +0000 UTC" firstStartedPulling="2026-01-24 07:17:29.642811124 +0000 UTC m=+1450.938916347" lastFinishedPulling="2026-01-24 07:17:30.849780003 +0000 UTC m=+1452.145885226" observedRunningTime="2026-01-24 07:17:31.675360083 +0000 UTC m=+1452.971465316" watchObservedRunningTime="2026-01-24 07:17:31.689474895 +0000 UTC m=+1452.985580118" Jan 24 07:17:32 crc kubenswrapper[4675]: I0124 07:17:32.647605 4675 generic.go:334] "Generic (PLEG): container finished" podID="f64f926c-c118-46ad-80c6-a51e8e235362" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656" exitCode=0 Jan 24 07:17:32 crc kubenswrapper[4675]: I0124 07:17:32.647729 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656"} Jan 24 07:17:34 crc kubenswrapper[4675]: I0124 07:17:34.674857 4675 generic.go:334] "Generic (PLEG): container finished" podID="55150857-7da2-4609-84be-9cbaa28141ed" containerID="9d44c74646a51dd3d2e85a8e39a474b7cca6ee83fc282eea72fa4e3a554243fe" exitCode=0 Jan 24 07:17:34 crc kubenswrapper[4675]: I0124 07:17:34.675248 4675 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerDied","Data":"9d44c74646a51dd3d2e85a8e39a474b7cca6ee83fc282eea72fa4e3a554243fe"} Jan 24 07:17:34 crc kubenswrapper[4675]: I0124 07:17:34.681660 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerStarted","Data":"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"} Jan 24 07:17:35 crc kubenswrapper[4675]: I0124 07:17:35.693346 4675 generic.go:334] "Generic (PLEG): container finished" podID="f64f926c-c118-46ad-80c6-a51e8e235362" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479" exitCode=0 Jan 24 07:17:35 crc kubenswrapper[4675]: I0124 07:17:35.693426 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"} Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.084098 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.147641 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") pod \"55150857-7da2-4609-84be-9cbaa28141ed\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.147766 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") pod \"55150857-7da2-4609-84be-9cbaa28141ed\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.147971 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") pod \"55150857-7da2-4609-84be-9cbaa28141ed\" (UID: \"55150857-7da2-4609-84be-9cbaa28141ed\") " Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.176661 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq" (OuterVolumeSpecName: "kube-api-access-ps5qq") pod "55150857-7da2-4609-84be-9cbaa28141ed" (UID: "55150857-7da2-4609-84be-9cbaa28141ed"). InnerVolumeSpecName "kube-api-access-ps5qq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.178277 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "55150857-7da2-4609-84be-9cbaa28141ed" (UID: "55150857-7da2-4609-84be-9cbaa28141ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.189370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory" (OuterVolumeSpecName: "inventory") pod "55150857-7da2-4609-84be-9cbaa28141ed" (UID: "55150857-7da2-4609-84be-9cbaa28141ed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.250327 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps5qq\" (UniqueName: \"kubernetes.io/projected/55150857-7da2-4609-84be-9cbaa28141ed-kube-api-access-ps5qq\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.250488 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.250543 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55150857-7da2-4609-84be-9cbaa28141ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.399123 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:36 crc 
kubenswrapper[4675]: I0124 07:17:36.399175 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ffsqt" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.711063 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" event={"ID":"55150857-7da2-4609-84be-9cbaa28141ed","Type":"ContainerDied","Data":"453b3ef2649da6a1c50b0dc98d40d38633741b39bc391c1b8a4378bfcbb7db66"} Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.711376 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="453b3ef2649da6a1c50b0dc98d40d38633741b39bc391c1b8a4378bfcbb7db66" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.711429 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zd8ln" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.730159 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerStarted","Data":"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"} Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.756167 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-27f4l" podStartSLOduration=3.259693044 podStartE2EDuration="6.756149703s" podCreationTimestamp="2026-01-24 07:17:30 +0000 UTC" firstStartedPulling="2026-01-24 07:17:32.649531817 +0000 UTC m=+1453.945637040" lastFinishedPulling="2026-01-24 07:17:36.145988476 +0000 UTC m=+1457.442093699" observedRunningTime="2026-01-24 07:17:36.754911153 +0000 UTC m=+1458.051016376" watchObservedRunningTime="2026-01-24 07:17:36.756149703 +0000 UTC m=+1458.052254926" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.794278 4675 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw"] Jan 24 07:17:36 crc kubenswrapper[4675]: E0124 07:17:36.794813 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55150857-7da2-4609-84be-9cbaa28141ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.794838 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="55150857-7da2-4609-84be-9cbaa28141ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.795053 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="55150857-7da2-4609-84be-9cbaa28141ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.795846 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.804349 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.804817 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.805145 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.805185 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.814971 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw"] Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.860847 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.860932 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.861013 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.861050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.962843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.962950 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.962981 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.963127 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.969552 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.970564 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.971124 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:36 crc kubenswrapper[4675]: I0124 07:17:36.981069 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.121271 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.453916 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw"] Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.467690 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ffsqt" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server" probeResult="failure" output=< Jan 24 07:17:37 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:17:37 crc kubenswrapper[4675]: > Jan 24 07:17:37 crc kubenswrapper[4675]: I0124 07:17:37.757855 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerStarted","Data":"57432d5c7d3510a9357d6e2e9e14bd4d88d3c826de5299149d5d3818512d7037"} Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.768162 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerStarted","Data":"ce27a8857b567fc180eec8926ea79a194563de9a40fccc96fae87fba64bf0d79"} Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.793949 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" podStartSLOduration=2.318964092 podStartE2EDuration="2.79392643s" podCreationTimestamp="2026-01-24 07:17:36 +0000 UTC" firstStartedPulling="2026-01-24 07:17:37.460182916 +0000 UTC m=+1458.756288139" lastFinishedPulling="2026-01-24 07:17:37.935145254 +0000 UTC m=+1459.231250477" observedRunningTime="2026-01-24 07:17:38.783487086 +0000 UTC m=+1460.079592299" watchObservedRunningTime="2026-01-24 07:17:38.79392643 +0000 
UTC m=+1460.090031653" Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.918513 4675 scope.go:117] "RemoveContainer" containerID="cf93369f45b95439f48ef44ae1c4d7acc85ac8a88c7301daa8df8a93d1811848" Jan 24 07:17:38 crc kubenswrapper[4675]: I0124 07:17:38.950043 4675 scope.go:117] "RemoveContainer" containerID="d4a048de2d3fd4b88b2f20e705ec4ca23a40e930bd260b2ef49c5084f7b87b5b" Jan 24 07:17:39 crc kubenswrapper[4675]: I0124 07:17:39.006896 4675 scope.go:117] "RemoveContainer" containerID="874bbdad57146cc137ef4243de0a7736d7fb10ae05c52ce16c88dd3f2052c38a" Jan 24 07:17:40 crc kubenswrapper[4675]: I0124 07:17:40.969269 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:40 crc kubenswrapper[4675]: I0124 07:17:40.969758 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:41 crc kubenswrapper[4675]: I0124 07:17:41.024878 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:41 crc kubenswrapper[4675]: I0124 07:17:41.851487 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:41 crc kubenswrapper[4675]: I0124 07:17:41.897963 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"] Jan 24 07:17:43 crc kubenswrapper[4675]: I0124 07:17:43.809539 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-27f4l" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server" containerID="cri-o://43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652" gracePeriod=2 Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.283873 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.416106 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") pod \"f64f926c-c118-46ad-80c6-a51e8e235362\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.416163 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") pod \"f64f926c-c118-46ad-80c6-a51e8e235362\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.416371 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") pod \"f64f926c-c118-46ad-80c6-a51e8e235362\" (UID: \"f64f926c-c118-46ad-80c6-a51e8e235362\") " Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.421350 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities" (OuterVolumeSpecName: "utilities") pod "f64f926c-c118-46ad-80c6-a51e8e235362" (UID: "f64f926c-c118-46ad-80c6-a51e8e235362"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.421818 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs" (OuterVolumeSpecName: "kube-api-access-wnjgs") pod "f64f926c-c118-46ad-80c6-a51e8e235362" (UID: "f64f926c-c118-46ad-80c6-a51e8e235362"). InnerVolumeSpecName "kube-api-access-wnjgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.447098 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f64f926c-c118-46ad-80c6-a51e8e235362" (UID: "f64f926c-c118-46ad-80c6-a51e8e235362"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.519475 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.519524 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnjgs\" (UniqueName: \"kubernetes.io/projected/f64f926c-c118-46ad-80c6-a51e8e235362-kube-api-access-wnjgs\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.519543 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64f926c-c118-46ad-80c6-a51e8e235362-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.827286 4675 generic.go:334] "Generic (PLEG): container finished" podID="f64f926c-c118-46ad-80c6-a51e8e235362" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652" exitCode=0 Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.827369 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"} Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.827386 4675 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27f4l"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.828763    4675 scope.go:117] "RemoveContainer" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.829800    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27f4l" event={"ID":"f64f926c-c118-46ad-80c6-a51e8e235362","Type":"ContainerDied","Data":"4f762e7703f2c35c06e17874ec87652c7686051216a527c399a2b0350fb4dbf6"}
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.864586    4675 scope.go:117] "RemoveContainer" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.892146    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"]
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.892396    4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-27f4l"]
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.915109    4675 scope.go:117] "RemoveContainer" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.944920    4675 scope.go:117] "RemoveContainer" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"
Jan 24 07:17:44 crc kubenswrapper[4675]: E0124 07:17:44.945834    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652\": container with ID starting with 43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652 not found: ID does not exist" containerID="43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.945875    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652"} err="failed to get container status \"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652\": rpc error: code = NotFound desc = could not find container \"43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652\": container with ID starting with 43a1fa656c8e646cd6a9298639c5fbdd62e0c78d47c7a049f3b1548a112b2652 not found: ID does not exist"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.945899    4675 scope.go:117] "RemoveContainer" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"
Jan 24 07:17:44 crc kubenswrapper[4675]: E0124 07:17:44.946413    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479\": container with ID starting with dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479 not found: ID does not exist" containerID="dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.946471    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479"} err="failed to get container status \"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479\": rpc error: code = NotFound desc = could not find container \"dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479\": container with ID starting with dae6775dfbc0dcc054d10547744e67ad5f1d0004364bf4b7b0f554e58028e479 not found: ID does not exist"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.946500    4675 scope.go:117] "RemoveContainer" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656"
Jan 24 07:17:44 crc kubenswrapper[4675]: E0124 07:17:44.946899    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656\": container with ID starting with 6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656 not found: ID does not exist" containerID="6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.947140    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656"} err="failed to get container status \"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656\": rpc error: code = NotFound desc = could not find container \"6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656\": container with ID starting with 6847bd1ed1b8532db0b255293a006710decef2ae31c6a7a60a2d052f186a3656 not found: ID does not exist"
Jan 24 07:17:44 crc kubenswrapper[4675]: I0124 07:17:44.957538    4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" path="/var/lib/kubelet/pods/f64f926c-c118-46ad-80c6-a51e8e235362/volumes"
Jan 24 07:17:46 crc kubenswrapper[4675]: I0124 07:17:46.447655    4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:46 crc kubenswrapper[4675]: I0124 07:17:46.498742    4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:47 crc kubenswrapper[4675]: I0124 07:17:47.659087    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"]
Jan 24 07:17:47 crc kubenswrapper[4675]: I0124 07:17:47.852439    4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ffsqt" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server" containerID="cri-o://1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3" gracePeriod=2
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.309618    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.393124    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") pod \"215a25e3-9603-4b01-b578-c5f6883fd589\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") "
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.393233    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") pod \"215a25e3-9603-4b01-b578-c5f6883fd589\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") "
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.393331    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") pod \"215a25e3-9603-4b01-b578-c5f6883fd589\" (UID: \"215a25e3-9603-4b01-b578-c5f6883fd589\") "
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.394928    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities" (OuterVolumeSpecName: "utilities") pod "215a25e3-9603-4b01-b578-c5f6883fd589" (UID: "215a25e3-9603-4b01-b578-c5f6883fd589"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.401365    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q" (OuterVolumeSpecName: "kube-api-access-zvx8q") pod "215a25e3-9603-4b01-b578-c5f6883fd589" (UID: "215a25e3-9603-4b01-b578-c5f6883fd589"). InnerVolumeSpecName "kube-api-access-zvx8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.436118    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "215a25e3-9603-4b01-b578-c5f6883fd589" (UID: "215a25e3-9603-4b01-b578-c5f6883fd589"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.495276    4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvx8q\" (UniqueName: \"kubernetes.io/projected/215a25e3-9603-4b01-b578-c5f6883fd589-kube-api-access-zvx8q\") on node \"crc\" DevicePath \"\""
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.495304    4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.495314    4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a25e3-9603-4b01-b578-c5f6883fd589-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863091    4675 generic.go:334] "Generic (PLEG): container finished" podID="215a25e3-9603-4b01-b578-c5f6883fd589" containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3" exitCode=0
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863134    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"}
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863161    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffsqt" event={"ID":"215a25e3-9603-4b01-b578-c5f6883fd589","Type":"ContainerDied","Data":"772e3cfb34ff48595caf0e6b29650847a506a8da6d2a6bdb2d081faf4a3cd015"}
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863179    4675 scope.go:117] "RemoveContainer" containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.863301    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffsqt"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.889741    4675 scope.go:117] "RemoveContainer" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.905133    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"]
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.910998    4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ffsqt"]
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.920197    4675 scope.go:117] "RemoveContainer" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.961536    4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" path="/var/lib/kubelet/pods/215a25e3-9603-4b01-b578-c5f6883fd589/volumes"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.964785    4675 scope.go:117] "RemoveContainer" containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"
Jan 24 07:17:48 crc kubenswrapper[4675]: E0124 07:17:48.965755    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3\": container with ID starting with 1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3 not found: ID does not exist" containerID="1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.965802    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3"} err="failed to get container status \"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3\": rpc error: code = NotFound desc = could not find container \"1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3\": container with ID starting with 1fe2c2cecc07e6e0dd274c470f2bcb6a57add4f27a6e0051e22cf8166d7013b3 not found: ID does not exist"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.965830    4675 scope.go:117] "RemoveContainer" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"
Jan 24 07:17:48 crc kubenswrapper[4675]: E0124 07:17:48.966230    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698\": container with ID starting with ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698 not found: ID does not exist" containerID="ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.966332    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698"} err="failed to get container status \"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698\": rpc error: code = NotFound desc = could not find container \"ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698\": container with ID starting with ae7b38e4c28c4233b9537122a8dd40e038ba79ed638b16a303ed0c36696da698 not found: ID does not exist"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.966377    4675 scope.go:117] "RemoveContainer" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04"
Jan 24 07:17:48 crc kubenswrapper[4675]: E0124 07:17:48.966641    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04\": container with ID starting with 5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04 not found: ID does not exist" containerID="5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04"
Jan 24 07:17:48 crc kubenswrapper[4675]: I0124 07:17:48.966667    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04"} err="failed to get container status \"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04\": rpc error: code = NotFound desc = could not find container \"5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04\": container with ID starting with 5a0e8b67edb05961277a7c697143dbac0b77d75c69aa422655c716be21d12f04 not found: ID does not exist"
Jan 24 07:18:08 crc kubenswrapper[4675]: I0124 07:18:08.630214    4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:18:08 crc kubenswrapper[4675]: I0124 07:18:08.631159    4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.906417    4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s7cw6"]
Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907422    4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907444    4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server"
Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907469    4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-content"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907481    4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-content"
Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907504    4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-utilities"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907517    4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="extract-utilities"
Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907534    4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907546    4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server"
Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907623    4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-content"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907636    4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-content"
Jan 24 07:18:10 crc kubenswrapper[4675]: E0124 07:18:10.907657    4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-utilities"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907668    4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="extract-utilities"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.907969    4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64f926c-c118-46ad-80c6-a51e8e235362" containerName="registry-server"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.908014    4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="215a25e3-9603-4b01-b578-c5f6883fd589" containerName="registry-server"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.909953    4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.937174    4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"]
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.967755    4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.967888    4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:10 crc kubenswrapper[4675]: I0124 07:18:10.967997    4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069300    4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069444    4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069499    4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.069977    4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.070072    4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.088579    4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"community-operators-s7cw6\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") " pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.231108    4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:11 crc kubenswrapper[4675]: I0124 07:18:11.687751    4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"]
Jan 24 07:18:12 crc kubenswrapper[4675]: I0124 07:18:12.103669    4675 generic.go:334] "Generic (PLEG): container finished" podID="0d407256-826f-449b-bc5d-c7ba87f55424" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866" exitCode=0
Jan 24 07:18:12 crc kubenswrapper[4675]: I0124 07:18:12.103765    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866"}
Jan 24 07:18:12 crc kubenswrapper[4675]: I0124 07:18:12.104823    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerStarted","Data":"7a32005f4223a80017a570cbb6c1d2140e4463bb0b2d53f0d3bc689f002806a6"}
Jan 24 07:18:14 crc kubenswrapper[4675]: I0124 07:18:14.123274    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerStarted","Data":"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"}
Jan 24 07:18:15 crc kubenswrapper[4675]: I0124 07:18:15.133469    4675 generic.go:334] "Generic (PLEG): container finished" podID="0d407256-826f-449b-bc5d-c7ba87f55424" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418" exitCode=0
Jan 24 07:18:15 crc kubenswrapper[4675]: I0124 07:18:15.133558    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"}
Jan 24 07:18:17 crc kubenswrapper[4675]: I0124 07:18:17.158510    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerStarted","Data":"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"}
Jan 24 07:18:17 crc kubenswrapper[4675]: I0124 07:18:17.188064    4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s7cw6" podStartSLOduration=2.950308405 podStartE2EDuration="7.188039981s" podCreationTimestamp="2026-01-24 07:18:10 +0000 UTC" firstStartedPulling="2026-01-24 07:18:12.105500699 +0000 UTC m=+1493.401605922" lastFinishedPulling="2026-01-24 07:18:16.343232275 +0000 UTC m=+1497.639337498" observedRunningTime="2026-01-24 07:18:17.182263541 +0000 UTC m=+1498.478368794" watchObservedRunningTime="2026-01-24 07:18:17.188039981 +0000 UTC m=+1498.484145244"
Jan 24 07:18:21 crc kubenswrapper[4675]: I0124 07:18:21.232214    4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:21 crc kubenswrapper[4675]: I0124 07:18:21.232914    4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:21 crc kubenswrapper[4675]: I0124 07:18:21.288412    4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:22 crc kubenswrapper[4675]: I0124 07:18:22.289379    4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:22 crc kubenswrapper[4675]: I0124 07:18:22.353641    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"]
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.221903    4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s7cw6" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" containerID="cri-o://8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8" gracePeriod=2
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.674662    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.855266    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") pod \"0d407256-826f-449b-bc5d-c7ba87f55424\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") "
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.855322    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") pod \"0d407256-826f-449b-bc5d-c7ba87f55424\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") "
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.855585    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") pod \"0d407256-826f-449b-bc5d-c7ba87f55424\" (UID: \"0d407256-826f-449b-bc5d-c7ba87f55424\") "
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.856102    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities" (OuterVolumeSpecName: "utilities") pod "0d407256-826f-449b-bc5d-c7ba87f55424" (UID: "0d407256-826f-449b-bc5d-c7ba87f55424"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.865528    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s" (OuterVolumeSpecName: "kube-api-access-x6t4s") pod "0d407256-826f-449b-bc5d-c7ba87f55424" (UID: "0d407256-826f-449b-bc5d-c7ba87f55424"). InnerVolumeSpecName "kube-api-access-x6t4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.914986    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d407256-826f-449b-bc5d-c7ba87f55424" (UID: "0d407256-826f-449b-bc5d-c7ba87f55424"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.958609    4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.958637    4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d407256-826f-449b-bc5d-c7ba87f55424-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:18:24 crc kubenswrapper[4675]: I0124 07:18:24.958646    4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6t4s\" (UniqueName: \"kubernetes.io/projected/0d407256-826f-449b-bc5d-c7ba87f55424-kube-api-access-x6t4s\") on node \"crc\" DevicePath \"\""
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.245853    4675 generic.go:334] "Generic (PLEG): container finished" podID="0d407256-826f-449b-bc5d-c7ba87f55424" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8" exitCode=0
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.245943    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7cw6"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.245935    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"}
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.246314    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7cw6" event={"ID":"0d407256-826f-449b-bc5d-c7ba87f55424","Type":"ContainerDied","Data":"7a32005f4223a80017a570cbb6c1d2140e4463bb0b2d53f0d3bc689f002806a6"}
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.246336    4675 scope.go:117] "RemoveContainer" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.294176    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"]
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.295243    4675 scope.go:117] "RemoveContainer" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.302409    4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s7cw6"]
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.331039    4675 scope.go:117] "RemoveContainer" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.362622    4675 scope.go:117] "RemoveContainer" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"
Jan 24 07:18:25 crc kubenswrapper[4675]: E0124 07:18:25.363163    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8\": container with ID starting with 8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8 not found: ID does not exist" containerID="8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363191    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8"} err="failed to get container status \"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8\": rpc error: code = NotFound desc = could not find container \"8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8\": container with ID starting with 8608f6db8e51915fdac39ba705dfb9c07c1a08112d28437dd3c3abbd95dd19a8 not found: ID does not exist"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363210    4675 scope.go:117] "RemoveContainer" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"
Jan 24 07:18:25 crc kubenswrapper[4675]: E0124 07:18:25.363484    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418\": container with ID starting with 79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418 not found: ID does not exist" containerID="79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363503    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418"} err="failed to get container status \"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418\": rpc error: code = NotFound desc = could not find container \"79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418\": container with ID starting with 79ccb3d74c5df4c1c7f187391b263622ccb9c62d837498fedd7028144ef47418 not found: ID does not exist"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363517    4675 scope.go:117] "RemoveContainer" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866"
Jan 24 07:18:25 crc kubenswrapper[4675]: E0124 07:18:25.363926    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866\": container with ID starting with 158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866 not found: ID does not exist" containerID="158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866"
Jan 24 07:18:25 crc kubenswrapper[4675]: I0124 07:18:25.363952    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866"} err="failed to get container status \"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866\": rpc error: code = NotFound desc = could not find container \"158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866\": container with ID starting with 158f26d99c411a6bf799846fb148a275cca321d5e54bf0d3004524f0a0e1c866 not found: ID does not exist"
Jan 24 07:18:26 crc kubenswrapper[4675]: I0124 07:18:26.959188    4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" path="/var/lib/kubelet/pods/0d407256-826f-449b-bc5d-c7ba87f55424/volumes"
Jan 24 07:18:38 crc kubenswrapper[4675]: I0124 07:18:38.629972    4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:18:38 crc kubenswrapper[4675]: I0124 07:18:38.630413    4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:18:39 crc kubenswrapper[4675]: I0124 07:18:39.144339    4675 scope.go:117] "RemoveContainer" containerID="6e86539bbdd5da050dd7b36207c60522a769e1e7ac856b3f85e7b51da5db45a6"
Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.630158    4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.630920    4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.631001    4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c"
Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.632098    4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 07:19:08 crc kubenswrapper[4675]: I0124 07:19:08.632206    4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" gracePeriod=600
Jan 24 07:19:08 crc kubenswrapper[4675]: E0124 07:19:08.755293    4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.760484    4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" exitCode=0
Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.760585    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"}
Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.760850    4675 scope.go:117] "RemoveContainer" containerID="c57b46ad673cdfd63921bb6948675e30fb84216d961ca2e82415fb89b85b5df0"
Jan 24 07:19:09 crc kubenswrapper[4675]: I0124 07:19:09.761333    4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"
Jan 24 07:19:09 crc kubenswrapper[4675]: E0124 07:19:09.761627    4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting
failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:19:21 crc kubenswrapper[4675]: I0124 07:19:21.942849 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:19:21 crc kubenswrapper[4675]: E0124 07:19:21.944638 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:19:36 crc kubenswrapper[4675]: I0124 07:19:36.943499 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:19:36 crc kubenswrapper[4675]: E0124 07:19:36.944393 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:19:47 crc kubenswrapper[4675]: I0124 07:19:47.943328 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:19:47 crc kubenswrapper[4675]: E0124 07:19:47.944363 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:01 crc kubenswrapper[4675]: I0124 07:20:01.943122 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:01 crc kubenswrapper[4675]: E0124 07:20:01.944016 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:13 crc kubenswrapper[4675]: I0124 07:20:13.942965 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:13 crc kubenswrapper[4675]: E0124 07:20:13.943790 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:28 crc kubenswrapper[4675]: I0124 07:20:28.954273 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:28 crc kubenswrapper[4675]: E0124 07:20:28.955299 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:39 crc kubenswrapper[4675]: I0124 07:20:39.258795 4675 scope.go:117] "RemoveContainer" containerID="7f4de3b3644f7a4cb5893a806c2e209a553b8896d0ec64835b19a118ea983566" Jan 24 07:20:39 crc kubenswrapper[4675]: I0124 07:20:39.286985 4675 scope.go:117] "RemoveContainer" containerID="c1ed323221939791011d988310c5e1001dcc2cf9dcc422d083610000da9a42e7" Jan 24 07:20:42 crc kubenswrapper[4675]: I0124 07:20:42.943326 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:42 crc kubenswrapper[4675]: E0124 07:20:42.944281 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:20:53 crc kubenswrapper[4675]: I0124 07:20:53.943222 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:20:53 crc kubenswrapper[4675]: E0124 07:20:53.943914 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:05 crc kubenswrapper[4675]: I0124 07:21:05.943523 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:05 crc kubenswrapper[4675]: E0124 07:21:05.944365 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.050179 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gqpfm"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.064091 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.073551 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e5bb-account-create-update-r9xsl"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.082088 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gqpfm"] Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.985570 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147543ec-f687-430c-8a42-547c5861dbf4" path="/var/lib/kubelet/pods/147543ec-f687-430c-8a42-547c5861dbf4/volumes" Jan 24 07:21:08 crc kubenswrapper[4675]: I0124 07:21:08.986963 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade78eac-6799-49f4-b0ea-2f3dcb21273e" path="/var/lib/kubelet/pods/ade78eac-6799-49f4-b0ea-2f3dcb21273e/volumes" Jan 24 07:21:12 crc kubenswrapper[4675]: I0124 07:21:12.960309 4675 generic.go:334] "Generic (PLEG): container 
finished" podID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerID="ce27a8857b567fc180eec8926ea79a194563de9a40fccc96fae87fba64bf0d79" exitCode=0 Jan 24 07:21:12 crc kubenswrapper[4675]: I0124 07:21:12.960414 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerDied","Data":"ce27a8857b567fc180eec8926ea79a194563de9a40fccc96fae87fba64bf0d79"} Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.047124 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.057421 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1ef3-account-create-update-txcmj"] Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.367476 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.491977 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.492087 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.492131 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.492292 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") pod \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\" (UID: \"e9b8f08b-6ece-4b46-86c0-9c353d61c50c\") " Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.498224 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.499310 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl" (OuterVolumeSpecName: "kube-api-access-nb7dl") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "kube-api-access-nb7dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.524248 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory" (OuterVolumeSpecName: "inventory") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.527134 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9b8f08b-6ece-4b46-86c0-9c353d61c50c" (UID: "e9b8f08b-6ece-4b46-86c0-9c353d61c50c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594832 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594891 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594906 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb7dl\" (UniqueName: \"kubernetes.io/projected/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-kube-api-access-nb7dl\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.594917 4675 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b8f08b-6ece-4b46-86c0-9c353d61c50c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.952002 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66b11fd-5bd9-4ba0-bd60-b370a709be63" path="/var/lib/kubelet/pods/f66b11fd-5bd9-4ba0-bd60-b370a709be63/volumes" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.982770 4675 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" event={"ID":"e9b8f08b-6ece-4b46-86c0-9c353d61c50c","Type":"ContainerDied","Data":"57432d5c7d3510a9357d6e2e9e14bd4d88d3c826de5299149d5d3818512d7037"} Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.982864 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57432d5c7d3510a9357d6e2e9e14bd4d88d3c826de5299149d5d3818512d7037" Jan 24 07:21:14 crc kubenswrapper[4675]: I0124 07:21:14.982785 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.064397 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.078903 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.088815 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.096975 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-s7r45"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.104598 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zh8n7"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.112732 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1e77-account-create-update-7b985"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.120942 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"] Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121465 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-utilities" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121487 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-utilities" Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121503 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121511 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121533 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121539 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" Jan 24 07:21:15 crc kubenswrapper[4675]: E0124 07:21:15.121556 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-content" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121562 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="extract-content" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121775 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b8f08b-6ece-4b46-86c0-9c353d61c50c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.121798 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d407256-826f-449b-bc5d-c7ba87f55424" containerName="registry-server" Jan 24 07:21:15 crc 
kubenswrapper[4675]: I0124 07:21:15.122512 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.124878 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.124950 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.128135 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.128266 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.128685 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"] Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.315967 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.316038 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.317316 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.419614 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.419698 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.419795 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.425390 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.429354 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.443506 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-49lhh\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:15 crc kubenswrapper[4675]: I0124 07:21:15.741282 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.319078 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"] Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.322894 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.963231 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b2533f-cb15-4581-84c1-81235b34bfe5" path="/var/lib/kubelet/pods/33b2533f-cb15-4581-84c1-81235b34bfe5/volumes" Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.964746 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f87016-197d-4a38-94d7-4c7828af8ee3" path="/var/lib/kubelet/pods/45f87016-197d-4a38-94d7-4c7828af8ee3/volumes" Jan 24 07:21:16 crc kubenswrapper[4675]: I0124 07:21:16.965355 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b33dcb3-da61-44f3-9666-2b4afb90b9cd" path="/var/lib/kubelet/pods/5b33dcb3-da61-44f3-9666-2b4afb90b9cd/volumes" Jan 24 07:21:17 crc kubenswrapper[4675]: I0124 07:21:17.004461 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerStarted","Data":"78b7fe3eb41435eafa123bc3a0e51de4500cea6ee1cf2c6b836b190ca84df194"} Jan 24 07:21:18 crc kubenswrapper[4675]: I0124 07:21:18.016080 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerStarted","Data":"4e774ccd4a0c45f76240627c3ed1c01aa5657474b302cffa11c10c2b8e04e982"} Jan 24 07:21:18 crc kubenswrapper[4675]: I0124 07:21:18.033305 4675 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" podStartSLOduration=2.531955308 podStartE2EDuration="3.033288279s" podCreationTimestamp="2026-01-24 07:21:15 +0000 UTC" firstStartedPulling="2026-01-24 07:21:16.322692906 +0000 UTC m=+1677.618798119" lastFinishedPulling="2026-01-24 07:21:16.824025867 +0000 UTC m=+1678.120131090" observedRunningTime="2026-01-24 07:21:18.029920728 +0000 UTC m=+1679.326025961" watchObservedRunningTime="2026-01-24 07:21:18.033288279 +0000 UTC m=+1679.329393512" Jan 24 07:21:19 crc kubenswrapper[4675]: I0124 07:21:19.942678 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:19 crc kubenswrapper[4675]: E0124 07:21:19.943155 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:33 crc kubenswrapper[4675]: I0124 07:21:33.942583 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:33 crc kubenswrapper[4675]: E0124 07:21:33.943290 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:35 crc kubenswrapper[4675]: I0124 07:21:35.036846 4675 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:21:35 crc kubenswrapper[4675]: I0124 07:21:35.045715 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8k7rv"] Jan 24 07:21:36 crc kubenswrapper[4675]: I0124 07:21:36.959787 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c46b58-28e2-4896-8ae5-dc53cbe96ec9" path="/var/lib/kubelet/pods/38c46b58-28e2-4896-8ae5-dc53cbe96ec9/volumes" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.363877 4675 scope.go:117] "RemoveContainer" containerID="9e2bdeffa8a165fe95edf61087a0f3f330b590c20cac1b5409ef71c4c21879df" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.418804 4675 scope.go:117] "RemoveContainer" containerID="2410d88c73d46b104c7a96605edcd69c1a2ae6d7410fac2b2340c43785d9bc0e" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.447257 4675 scope.go:117] "RemoveContainer" containerID="1a52396d2314002bfe722f95ecb36d5eaf563c649851e4153e001371ff49687c" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.497627 4675 scope.go:117] "RemoveContainer" containerID="cb5f5de19b4ad05d5cb260b67a7ffda59880a5be1b09d0c5d743d36c1be22ba3" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.529525 4675 scope.go:117] "RemoveContainer" containerID="f1cbd2804e3c921d0862ddd3c3e25da9a0eb08d8f218d2fcc9340af63efc5b69" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.568257 4675 scope.go:117] "RemoveContainer" containerID="76e5054851b4909dbeb1cd4deac1c991823b0d9876d867ee5c156baf2fa53d30" Jan 24 07:21:39 crc kubenswrapper[4675]: I0124 07:21:39.611501 4675 scope.go:117] "RemoveContainer" containerID="974d7fcdae70428bd478be3b3521612bb5892f56ced9fe76c6f84ebdcecc2fc2" Jan 24 07:21:44 crc kubenswrapper[4675]: I0124 07:21:44.942843 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:44 crc kubenswrapper[4675]: E0124 07:21:44.943679 4675 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.081957 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.091491 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-391e-account-create-update-r55gs"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.104255 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.115289 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6lfkb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.125736 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.138579 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bbqrz"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.148735 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6lfkb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.157123 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bbqrz"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.164444 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ffb8-account-create-update-2lngf"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 
07:21:46.173009 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4de6-account-create-update-vzw5r"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.182356 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5zwrb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.190850 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5zwrb"] Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.952264 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0a3027-2e26-4258-aaee-a5f0df76fe34" path="/var/lib/kubelet/pods/5e0a3027-2e26-4258-aaee-a5f0df76fe34/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.953366 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e546ec4-3ea8-4140-9238-8d5cdd09e4e9" path="/var/lib/kubelet/pods/8e546ec4-3ea8-4140-9238-8d5cdd09e4e9/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.954349 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6d6162-9f1a-409f-a1aa-87a14a15bf7f" path="/var/lib/kubelet/pods/ab6d6162-9f1a-409f-a1aa-87a14a15bf7f/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.955132 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba347982-6836-4e3f-80c3-ef28ffc5e5cc" path="/var/lib/kubelet/pods/ba347982-6836-4e3f-80c3-ef28ffc5e5cc/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.956513 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce44ec9-1ffb-44d7-bcce-250a1fdf6959" path="/var/lib/kubelet/pods/cce44ec9-1ffb-44d7-bcce-250a1fdf6959/volumes" Jan 24 07:21:46 crc kubenswrapper[4675]: I0124 07:21:46.957290 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f802e166-b89b-4e38-9230-762edc86b32c" path="/var/lib/kubelet/pods/f802e166-b89b-4e38-9230-762edc86b32c/volumes" Jan 24 07:21:53 crc kubenswrapper[4675]: I0124 
07:21:53.048927 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ttgww"] Jan 24 07:21:53 crc kubenswrapper[4675]: I0124 07:21:53.060503 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ttgww"] Jan 24 07:21:54 crc kubenswrapper[4675]: I0124 07:21:54.952292 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c949a736-b46d-4907-a24d-17f28f4e3f71" path="/var/lib/kubelet/pods/c949a736-b46d-4907-a24d-17f28f4e3f71/volumes" Jan 24 07:21:59 crc kubenswrapper[4675]: I0124 07:21:59.943169 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:21:59 crc kubenswrapper[4675]: E0124 07:21:59.943981 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:11 crc kubenswrapper[4675]: I0124 07:22:11.034335 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-95xkb"] Jan 24 07:22:11 crc kubenswrapper[4675]: I0124 07:22:11.043202 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-95xkb"] Jan 24 07:22:12 crc kubenswrapper[4675]: I0124 07:22:12.960050 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e53c5a1-6293-46d9-9783-e7d183050152" path="/var/lib/kubelet/pods/7e53c5a1-6293-46d9-9783-e7d183050152/volumes" Jan 24 07:22:13 crc kubenswrapper[4675]: I0124 07:22:13.944139 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:13 crc kubenswrapper[4675]: E0124 07:22:13.945494 4675 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:27 crc kubenswrapper[4675]: I0124 07:22:27.943236 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:27 crc kubenswrapper[4675]: E0124 07:22:27.945152 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:33 crc kubenswrapper[4675]: I0124 07:22:33.037504 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:22:33 crc kubenswrapper[4675]: I0124 07:22:33.046644 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4hsxg"] Jan 24 07:22:34 crc kubenswrapper[4675]: I0124 07:22:34.957150 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871f5758-f078-4271-acb9-e5ca8bfdc2eb" path="/var/lib/kubelet/pods/871f5758-f078-4271-acb9-e5ca8bfdc2eb/volumes" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.749546 4675 scope.go:117] "RemoveContainer" containerID="1cf02099876733db0045ce49593ffbde19db42e4c0d54b5221192666290a2ec9" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.808355 4675 scope.go:117] "RemoveContainer" 
containerID="face7e5c0b8054d6c99e86c42a7c3b558ca54c06b16b7b249ea8d2239d88036b" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.840652 4675 scope.go:117] "RemoveContainer" containerID="bdb786fea2e5d2877731346b3f673262878acb6d16f62d6d292e1a2d801ca4e0" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.882882 4675 scope.go:117] "RemoveContainer" containerID="fdb88fe5e8d5c3d574f7618a944551c7b762f983498c9ea3e4b037a53bfad902" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.941826 4675 scope.go:117] "RemoveContainer" containerID="c2b0d0fa45b902eb0ffa086ad50d248f34796e32c1a20209565126bead4f77e0" Jan 24 07:22:39 crc kubenswrapper[4675]: I0124 07:22:39.980051 4675 scope.go:117] "RemoveContainer" containerID="2617af6172b0f231078c0676a80fde395fe2ef1163c9fa0791bb89294c2f806c" Jan 24 07:22:40 crc kubenswrapper[4675]: I0124 07:22:40.040246 4675 scope.go:117] "RemoveContainer" containerID="03ea54e271ba027af9b3efb600eaf2f980bfc8dab89a53460b77cbbc8373517e" Jan 24 07:22:40 crc kubenswrapper[4675]: I0124 07:22:40.063669 4675 scope.go:117] "RemoveContainer" containerID="f2b78394bf1beb82b28dc55cf3863a1ec788f53b7447575aebefd50f08d7bb67" Jan 24 07:22:40 crc kubenswrapper[4675]: I0124 07:22:40.103903 4675 scope.go:117] "RemoveContainer" containerID="6d211dc6ddf9ea6d7e3e8e95b729de63c53d51d2eead6595b62cad41e16dadc4" Jan 24 07:22:42 crc kubenswrapper[4675]: I0124 07:22:42.942183 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:42 crc kubenswrapper[4675]: E0124 07:22:42.942989 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.060714 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.067568 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-fp9qw"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.075208 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.083060 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v7kb4"] Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.963348 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01" path="/var/lib/kubelet/pods/5da3fd8e-4f1c-4a68-ae8d-ab0b06193e01/volumes" Jan 24 07:22:48 crc kubenswrapper[4675]: I0124 07:22:48.965474 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54df341-915c-4505-bd2e-81923b07a2be" path="/var/lib/kubelet/pods/f54df341-915c-4505-bd2e-81923b07a2be/volumes" Jan 24 07:22:56 crc kubenswrapper[4675]: I0124 07:22:56.943674 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:22:56 crc kubenswrapper[4675]: E0124 07:22:56.944332 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.458963 4675 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.460686 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.479136 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.552135 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.552232 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.552337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.654371 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " 
pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.654754 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.654921 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.655279 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.655456 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc kubenswrapper[4675]: I0124 07:22:57.672567 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"redhat-operators-znd7g\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:57 crc 
kubenswrapper[4675]: I0124 07:22:57.784849 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:22:58 crc kubenswrapper[4675]: I0124 07:22:58.238057 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:22:59 crc kubenswrapper[4675]: I0124 07:22:59.274130 4675 generic.go:334] "Generic (PLEG): container finished" podID="744470af-3cf4-4f93-8269-4e579adc0101" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765" exitCode=0 Jan 24 07:22:59 crc kubenswrapper[4675]: I0124 07:22:59.274256 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765"} Jan 24 07:22:59 crc kubenswrapper[4675]: I0124 07:22:59.274395 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerStarted","Data":"bd0b05f223be494afcf36db2c1abefbea4c81384df296c831af9fc3ee3f29310"} Jan 24 07:23:00 crc kubenswrapper[4675]: I0124 07:23:00.284478 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerStarted","Data":"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1"} Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.025049 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.034564 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.044994 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-g8f6m"] Jan 24 07:23:03 crc kubenswrapper[4675]: I0124 07:23:03.054709 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-58bxq"] Jan 24 07:23:04 crc kubenswrapper[4675]: I0124 07:23:04.955226 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d590a0d-6c41-407a-8e89-3e7b9a64a3f7" path="/var/lib/kubelet/pods/0d590a0d-6c41-407a-8e89-3e7b9a64a3f7/volumes" Jan 24 07:23:04 crc kubenswrapper[4675]: I0124 07:23:04.958671 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57270c73-9e5a-4629-8c7a-85123438a067" path="/var/lib/kubelet/pods/57270c73-9e5a-4629-8c7a-85123438a067/volumes" Jan 24 07:23:05 crc kubenswrapper[4675]: I0124 07:23:05.348618 4675 generic.go:334] "Generic (PLEG): container finished" podID="744470af-3cf4-4f93-8269-4e579adc0101" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1" exitCode=0 Jan 24 07:23:05 crc kubenswrapper[4675]: I0124 07:23:05.348686 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1"} Jan 24 07:23:06 crc kubenswrapper[4675]: I0124 07:23:06.360762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerStarted","Data":"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee"} Jan 24 07:23:06 crc kubenswrapper[4675]: I0124 07:23:06.388657 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-znd7g" podStartSLOduration=2.549650915 podStartE2EDuration="9.388633103s" podCreationTimestamp="2026-01-24 07:22:57 +0000 UTC" firstStartedPulling="2026-01-24 07:22:59.279063871 +0000 UTC m=+1780.575169094" 
lastFinishedPulling="2026-01-24 07:23:06.118046059 +0000 UTC m=+1787.414151282" observedRunningTime="2026-01-24 07:23:06.381117881 +0000 UTC m=+1787.677223114" watchObservedRunningTime="2026-01-24 07:23:06.388633103 +0000 UTC m=+1787.684738346" Jan 24 07:23:07 crc kubenswrapper[4675]: I0124 07:23:07.789841 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:07 crc kubenswrapper[4675]: I0124 07:23:07.790245 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:08 crc kubenswrapper[4675]: I0124 07:23:08.835748 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-znd7g" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server" probeResult="failure" output=< Jan 24 07:23:08 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:23:08 crc kubenswrapper[4675]: > Jan 24 07:23:09 crc kubenswrapper[4675]: I0124 07:23:09.943646 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:23:09 crc kubenswrapper[4675]: E0124 07:23:09.944208 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:23:17 crc kubenswrapper[4675]: I0124 07:23:17.842644 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:17 crc kubenswrapper[4675]: I0124 07:23:17.900202 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:18 crc kubenswrapper[4675]: I0124 07:23:18.092474 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:23:19 crc kubenswrapper[4675]: I0124 07:23:19.474222 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-znd7g" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server" containerID="cri-o://164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" gracePeriod=2 Jan 24 07:23:19 crc kubenswrapper[4675]: I0124 07:23:19.952196 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.154268 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") pod \"744470af-3cf4-4f93-8269-4e579adc0101\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.154539 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") pod \"744470af-3cf4-4f93-8269-4e579adc0101\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.154742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") pod \"744470af-3cf4-4f93-8269-4e579adc0101\" (UID: \"744470af-3cf4-4f93-8269-4e579adc0101\") " Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.155103 4675 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities" (OuterVolumeSpecName: "utilities") pod "744470af-3cf4-4f93-8269-4e579adc0101" (UID: "744470af-3cf4-4f93-8269-4e579adc0101"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.155333 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.165354 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds" (OuterVolumeSpecName: "kube-api-access-dh2ds") pod "744470af-3cf4-4f93-8269-4e579adc0101" (UID: "744470af-3cf4-4f93-8269-4e579adc0101"). InnerVolumeSpecName "kube-api-access-dh2ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.257124 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh2ds\" (UniqueName: \"kubernetes.io/projected/744470af-3cf4-4f93-8269-4e579adc0101-kube-api-access-dh2ds\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.331174 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "744470af-3cf4-4f93-8269-4e579adc0101" (UID: "744470af-3cf4-4f93-8269-4e579adc0101"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.358663 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744470af-3cf4-4f93-8269-4e579adc0101-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.486281 4675 generic.go:334] "Generic (PLEG): container finished" podID="744470af-3cf4-4f93-8269-4e579adc0101" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" exitCode=0 Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.486382 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znd7g" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.486385 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee"} Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.488315 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znd7g" event={"ID":"744470af-3cf4-4f93-8269-4e579adc0101","Type":"ContainerDied","Data":"bd0b05f223be494afcf36db2c1abefbea4c81384df296c831af9fc3ee3f29310"} Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.488341 4675 scope.go:117] "RemoveContainer" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.510697 4675 scope.go:117] "RemoveContainer" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.548313 4675 scope.go:117] "RemoveContainer" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 
07:23:20.586162 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.596065 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-znd7g"] Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.606976 4675 scope.go:117] "RemoveContainer" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.607873 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee\": container with ID starting with 164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee not found: ID does not exist" containerID="164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608000 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee"} err="failed to get container status \"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee\": rpc error: code = NotFound desc = could not find container \"164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee\": container with ID starting with 164993251a0729204b84f2d1959cdd61e7e5313c6238e7733073d3fb32cee3ee not found: ID does not exist" Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608119 4675 scope.go:117] "RemoveContainer" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1" Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.608795 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1\": container with ID starting with 
e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1 not found: ID does not exist" containerID="e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1"
Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608863 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1"} err="failed to get container status \"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1\": rpc error: code = NotFound desc = could not find container \"e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1\": container with ID starting with e896c1f02bac28c4e28932f1e71801d049f83ea7290b8316145df3f857fc81d1 not found: ID does not exist"
Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.608881 4675 scope.go:117] "RemoveContainer" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765"
Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.609402 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765\": container with ID starting with a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765 not found: ID does not exist" containerID="a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765"
Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.609493 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765"} err="failed to get container status \"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765\": rpc error: code = NotFound desc = could not find container \"a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765\": container with ID starting with a71d931f7503c94016e1ccbdad0822882f76ac72918d68616cc2ab476f32f765 not found: ID does not exist"
Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.943318 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"
Jan 24 07:23:20 crc kubenswrapper[4675]: E0124 07:23:20.943854 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:23:20 crc kubenswrapper[4675]: I0124 07:23:20.961204 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744470af-3cf4-4f93-8269-4e579adc0101" path="/var/lib/kubelet/pods/744470af-3cf4-4f93-8269-4e579adc0101/volumes"
Jan 24 07:23:31 crc kubenswrapper[4675]: I0124 07:23:31.944132 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"
Jan 24 07:23:31 crc kubenswrapper[4675]: E0124 07:23:31.945458 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:23:34 crc kubenswrapper[4675]: I0124 07:23:34.638228 4675 generic.go:334] "Generic (PLEG): container finished" podID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerID="4e774ccd4a0c45f76240627c3ed1c01aa5657474b302cffa11c10c2b8e04e982" exitCode=0
Jan 24 07:23:34 crc kubenswrapper[4675]: I0124 07:23:34.638300 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerDied","Data":"4e774ccd4a0c45f76240627c3ed1c01aa5657474b302cffa11c10c2b8e04e982"}
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.092748 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.139680 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") pod \"09d123a4-63c4-4269-b4e1-12932baedfd0\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") "
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.139848 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") pod \"09d123a4-63c4-4269-b4e1-12932baedfd0\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") "
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.139987 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") pod \"09d123a4-63c4-4269-b4e1-12932baedfd0\" (UID: \"09d123a4-63c4-4269-b4e1-12932baedfd0\") "
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.144633 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d" (OuterVolumeSpecName: "kube-api-access-gd62d") pod "09d123a4-63c4-4269-b4e1-12932baedfd0" (UID: "09d123a4-63c4-4269-b4e1-12932baedfd0"). InnerVolumeSpecName "kube-api-access-gd62d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.168315 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "09d123a4-63c4-4269-b4e1-12932baedfd0" (UID: "09d123a4-63c4-4269-b4e1-12932baedfd0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.174577 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory" (OuterVolumeSpecName: "inventory") pod "09d123a4-63c4-4269-b4e1-12932baedfd0" (UID: "09d123a4-63c4-4269-b4e1-12932baedfd0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.242814 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.242861 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd62d\" (UniqueName: \"kubernetes.io/projected/09d123a4-63c4-4269-b4e1-12932baedfd0-kube-api-access-gd62d\") on node \"crc\" DevicePath \"\""
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.242877 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09d123a4-63c4-4269-b4e1-12932baedfd0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.667022 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh" event={"ID":"09d123a4-63c4-4269-b4e1-12932baedfd0","Type":"ContainerDied","Data":"78b7fe3eb41435eafa123bc3a0e51de4500cea6ee1cf2c6b836b190ca84df194"}
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.667069 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78b7fe3eb41435eafa123bc3a0e51de4500cea6ee1cf2c6b836b190ca84df194"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.667132 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-49lhh"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753044 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"]
Jan 24 07:23:36 crc kubenswrapper[4675]: E0124 07:23:36.753476 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753498 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server"
Jan 24 07:23:36 crc kubenswrapper[4675]: E0124 07:23:36.753510 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-content"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753517 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-content"
Jan 24 07:23:36 crc kubenswrapper[4675]: E0124 07:23:36.753543 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-utilities"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753549 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="extract-utilities"
Jan 24 07:23:36 crc kubenswrapper[4675]: E0124 07:23:36.753559 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753566 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753741 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d123a4-63c4-4269-b4e1-12932baedfd0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.753767 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="744470af-3cf4-4f93-8269-4e579adc0101" containerName="registry-server"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.754476 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.756808 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.757046 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.757231 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.758825 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.782265 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"]
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.853099 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.853202 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.853653 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.956247 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.956555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.956642 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.960064 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.965427 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:36 crc kubenswrapper[4675]: I0124 07:23:36.972916 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-td879\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:37 crc kubenswrapper[4675]: I0124 07:23:37.076397 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:23:37 crc kubenswrapper[4675]: I0124 07:23:37.583539 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"]
Jan 24 07:23:37 crc kubenswrapper[4675]: I0124 07:23:37.675151 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerStarted","Data":"3b993624ad3fbf948909e7968e38338ba99068f586554ea9b4c566880a979021"}
Jan 24 07:23:38 crc kubenswrapper[4675]: I0124 07:23:38.686017 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerStarted","Data":"493e076588b3025598ca6ab35ffda470c220488fd356f1398e842589774ea9b6"}
Jan 24 07:23:38 crc kubenswrapper[4675]: I0124 07:23:38.704701 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" podStartSLOduration=2.098655649 podStartE2EDuration="2.704682615s" podCreationTimestamp="2026-01-24 07:23:36 +0000 UTC" firstStartedPulling="2026-01-24 07:23:37.59160663 +0000 UTC m=+1818.887711843" lastFinishedPulling="2026-01-24 07:23:38.197633586 +0000 UTC m=+1819.493738809" observedRunningTime="2026-01-24 07:23:38.699902059 +0000 UTC m=+1819.996007282" watchObservedRunningTime="2026-01-24 07:23:38.704682615 +0000 UTC m=+1820.000787838"
Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.279610 4675 scope.go:117] "RemoveContainer" containerID="d2cd62045ebf2fa7b15faa8a57eb1e83b1434d06978bf2c230d6fd80499404d5"
Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.320168 4675 scope.go:117] "RemoveContainer" containerID="ec7473e1089d8da929e61e3782b155d95dfe82c94964d44704255a4214eea76c"
Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.386253 4675 scope.go:117] "RemoveContainer" containerID="5fd1de2ade476875bcd3cadbec86fb8450feb391c53de19fcd301aa7061837a8"
Jan 24 07:23:40 crc kubenswrapper[4675]: I0124 07:23:40.432799 4675 scope.go:117] "RemoveContainer" containerID="eb86a86e2ea4aab1599d35163fef6b9016931250bfcb0fdf136d0350b3794d53"
Jan 24 07:23:42 crc kubenswrapper[4675]: I0124 07:23:42.942147 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"
Jan 24 07:23:42 crc kubenswrapper[4675]: E0124 07:23:42.942770 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:23:48 crc kubenswrapper[4675]: I0124 07:23:48.046624 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:23:48 crc kubenswrapper[4675]: I0124 07:23:48.054355 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p847h"]
Jan 24 07:23:48 crc kubenswrapper[4675]: I0124 07:23:48.957364 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d4a29e-dbe1-4145-b0af-afa0c77172b9" path="/var/lib/kubelet/pods/d4d4a29e-dbe1-4145-b0af-afa0c77172b9/volumes"
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.037173 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.050490 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.060604 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.074332 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.082385 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.090510 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2gcsv"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.099864 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1e3e-account-create-update-z84p9"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.108703 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4z8kz"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.116198 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-47cc-account-create-update-qbjjs"]
Jan 24 07:23:49 crc kubenswrapper[4675]: I0124 07:23:49.125331 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-aab6-account-create-update-4zgt4"]
Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.955735 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6ffe68-4ebd-47e8-8b11-20050394e5b7" path="/var/lib/kubelet/pods/9b6ffe68-4ebd-47e8-8b11-20050394e5b7/volumes"
Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.957013 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c962c5e1-a244-4690-935e-9a7b0d5fc7e4" path="/var/lib/kubelet/pods/c962c5e1-a244-4690-935e-9a7b0d5fc7e4/volumes"
Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.957783 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb102798-6f2c-4cf4-b697-03cc94f9174a" path="/var/lib/kubelet/pods/cb102798-6f2c-4cf4-b697-03cc94f9174a/volumes"
Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.958569 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db48a3bd-546d-4f52-a9bc-340e03790730" path="/var/lib/kubelet/pods/db48a3bd-546d-4f52-a9bc-340e03790730/volumes"
Jan 24 07:23:50 crc kubenswrapper[4675]: I0124 07:23:50.959988 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8458b8a-6770-4e62-9848-55a9b142cb8c" path="/var/lib/kubelet/pods/f8458b8a-6770-4e62-9848-55a9b142cb8c/volumes"
Jan 24 07:23:55 crc kubenswrapper[4675]: I0124 07:23:55.942454 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"
Jan 24 07:23:55 crc kubenswrapper[4675]: E0124 07:23:55.943422 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:24:07 crc kubenswrapper[4675]: I0124 07:24:07.943464 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"
Jan 24 07:24:07 crc kubenswrapper[4675]: E0124 07:24:07.944400 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:24:18 crc kubenswrapper[4675]: I0124 07:24:18.959001 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38"
Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.040386 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"]
Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.048060 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvg8g"]
Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.083595 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2"}
Jan 24 07:24:20 crc kubenswrapper[4675]: I0124 07:24:20.953779 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827f33c6-ea9f-4312-9533-e952a218f464" path="/var/lib/kubelet/pods/827f33c6-ea9f-4312-9533-e952a218f464/volumes"
Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.565458 4675 scope.go:117] "RemoveContainer" containerID="692b01412cca7a95c030d0da68618054df44eaf3b20646d9b4064c305a011eb1"
Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.609259 4675 scope.go:117] "RemoveContainer" containerID="92a6b4b87b9b2ef26a79f73c81ecbeb36fe6ccb8b0e511ab2d00e52dda5c10ce"
Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.631252 4675 scope.go:117] "RemoveContainer" containerID="ccdf210ec59856c481255445ba67000d722f7daf10b18be409b9884e5bed261a"
Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.667696 4675 scope.go:117] "RemoveContainer" containerID="e024578d84cf52e29f779949e2955f4eac1d56a123af391ad810ea1674a31648"
Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.708480 4675 scope.go:117] "RemoveContainer" containerID="7ee7b6faa999fda3d6ec97508bbaca0406687b589e89517642e51b8d024a1a97"
Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.777322 4675 scope.go:117] "RemoveContainer" containerID="13922ccdb386ccdef5ac3f7ca81cf15c2217528fedc1c377893db26450c6489d"
Jan 24 07:24:40 crc kubenswrapper[4675]: I0124 07:24:40.815541 4675 scope.go:117] "RemoveContainer" containerID="dbf84230f864bc19464817aff5d36347fe8a661c6ce91661b718a8bbd234e6b5"
Jan 24 07:24:45 crc kubenswrapper[4675]: I0124 07:24:45.043953 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"]
Jan 24 07:24:45 crc kubenswrapper[4675]: I0124 07:24:45.055481 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5bzdh"]
Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.045500 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"]
Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.055044 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t2bwz"]
Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.958997 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284" path="/var/lib/kubelet/pods/5ca0e0be-0ebb-4e2c-bdbe-aadea03ed284/volumes"
Jan 24 07:24:46 crc kubenswrapper[4675]: I0124 07:24:46.960318 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1819bfe-22cc-4ead-8e81-717ee70b2e83" path="/var/lib/kubelet/pods/a1819bfe-22cc-4ead-8e81-717ee70b2e83/volumes"
Jan 24 07:25:07 crc kubenswrapper[4675]: I0124 07:25:07.501371 4675 generic.go:334] "Generic (PLEG): container finished" podID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerID="493e076588b3025598ca6ab35ffda470c220488fd356f1398e842589774ea9b6" exitCode=0
Jan 24 07:25:07 crc kubenswrapper[4675]: I0124 07:25:07.501467 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerDied","Data":"493e076588b3025598ca6ab35ffda470c220488fd356f1398e842589774ea9b6"}
Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.886413 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.968269 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") pod \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") "
Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.968383 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") pod \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") "
Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.968479 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") pod \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\" (UID: \"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3\") "
Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.974617 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf" (OuterVolumeSpecName: "kube-api-access-hfxrf") pod "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" (UID: "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3"). InnerVolumeSpecName "kube-api-access-hfxrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:25:08 crc kubenswrapper[4675]: I0124 07:25:08.995207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory" (OuterVolumeSpecName: "inventory") pod "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" (UID: "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.002232 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" (UID: "bc52fac9-92d8-4555-b942-5f0dcb4bf6f3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.070686 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfxrf\" (UniqueName: \"kubernetes.io/projected/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-kube-api-access-hfxrf\") on node \"crc\" DevicePath \"\""
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.070714 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.070735 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc52fac9-92d8-4555-b942-5f0dcb4bf6f3-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.526980 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879" event={"ID":"bc52fac9-92d8-4555-b942-5f0dcb4bf6f3","Type":"ContainerDied","Data":"3b993624ad3fbf948909e7968e38338ba99068f586554ea9b4c566880a979021"}
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.527039 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b993624ad3fbf948909e7968e38338ba99068f586554ea9b4c566880a979021"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.527152 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-td879"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.661670 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"]
Jan 24 07:25:09 crc kubenswrapper[4675]: E0124 07:25:09.663345 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.663394 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.663619 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc52fac9-92d8-4555-b942-5f0dcb4bf6f3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.664774 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673005 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673064 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673191 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.673370 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.697399 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"]
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.705413 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.705504 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.705576 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.806997 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.807342 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.807476 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.811348 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.811743 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.822280 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:09 crc kubenswrapper[4675]: I0124 07:25:09.995138 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"
Jan 24 07:25:10 crc kubenswrapper[4675]: I0124 07:25:10.563165 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc"]
Jan 24 07:25:10 crc kubenswrapper[4675]: W0124 07:25:10.573247 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9c128cc_910c_4ef2_9b56_14adf4d264b3.slice/crio-0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05 WatchSource:0}: Error finding container 0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05: Status 404 returned error can't find the container with id 0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05
Jan 24 07:25:11 crc kubenswrapper[4675]: I0124 07:25:11.549510 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerStarted","Data":"f76dc390b199552873d6fce988d31a984953181f0421729e74dbff18258c271b"}
Jan 24 07:25:11 crc kubenswrapper[4675]: I0124 07:25:11.550169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerStarted","Data":"0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05"}
Jan 24 07:25:11 crc kubenswrapper[4675]: I0124 07:25:11.574750 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" podStartSLOduration=1.981732667 podStartE2EDuration="2.574702781s" podCreationTimestamp="2026-01-24 07:25:09 +0000 UTC" firstStartedPulling="2026-01-24 07:25:10.57628705 +0000 UTC m=+1911.872392273" lastFinishedPulling="2026-01-24 07:25:11.169257164 +0000 UTC
m=+1912.465362387" observedRunningTime="2026-01-24 07:25:11.568360556 +0000 UTC m=+1912.864465789" watchObservedRunningTime="2026-01-24 07:25:11.574702781 +0000 UTC m=+1912.870808024" Jan 24 07:25:18 crc kubenswrapper[4675]: I0124 07:25:18.606561 4675 generic.go:334] "Generic (PLEG): container finished" podID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerID="f76dc390b199552873d6fce988d31a984953181f0421729e74dbff18258c271b" exitCode=0 Jan 24 07:25:18 crc kubenswrapper[4675]: I0124 07:25:18.607114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerDied","Data":"f76dc390b199552873d6fce988d31a984953181f0421729e74dbff18258c271b"} Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.038786 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.102913 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") pod \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.102976 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") pod \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.103092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") pod 
\"e9c128cc-910c-4ef2-9b56-14adf4d264b3\" (UID: \"e9c128cc-910c-4ef2-9b56-14adf4d264b3\") " Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.118896 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh" (OuterVolumeSpecName: "kube-api-access-5wbmh") pod "e9c128cc-910c-4ef2-9b56-14adf4d264b3" (UID: "e9c128cc-910c-4ef2-9b56-14adf4d264b3"). InnerVolumeSpecName "kube-api-access-5wbmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.132450 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9c128cc-910c-4ef2-9b56-14adf4d264b3" (UID: "e9c128cc-910c-4ef2-9b56-14adf4d264b3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.134780 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory" (OuterVolumeSpecName: "inventory") pod "e9c128cc-910c-4ef2-9b56-14adf4d264b3" (UID: "e9c128cc-910c-4ef2-9b56-14adf4d264b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.204996 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.205030 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wbmh\" (UniqueName: \"kubernetes.io/projected/e9c128cc-910c-4ef2-9b56-14adf4d264b3-kube-api-access-5wbmh\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.205045 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9c128cc-910c-4ef2-9b56-14adf4d264b3-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.624063 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" event={"ID":"e9c128cc-910c-4ef2-9b56-14adf4d264b3","Type":"ContainerDied","Data":"0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05"} Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.624108 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf206a882066b29f1df3738d0e01d0d4bc1dc6db08d64d153e7b882e34c9d05" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.624126 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.713248 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv"] Jan 24 07:25:20 crc kubenswrapper[4675]: E0124 07:25:20.713656 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.713677 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.713880 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c128cc-910c-4ef2-9b56-14adf4d264b3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.714564 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.717890 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.718131 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.718391 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.719382 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.724615 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv"] Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.814683 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.814941 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.815209 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.916647 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.916831 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.916882 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.921776 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.922484 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:20 crc kubenswrapper[4675]: I0124 07:25:20.936969 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vbvgv\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:21 crc kubenswrapper[4675]: I0124 07:25:21.071310 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:25:21 crc kubenswrapper[4675]: I0124 07:25:21.602526 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv"] Jan 24 07:25:21 crc kubenswrapper[4675]: I0124 07:25:21.634027 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerStarted","Data":"ba0545c2af86b7f80e12ad50cc6fe9ae7dbf0381beb86e961ddc73a301493bfd"} Jan 24 07:25:23 crc kubenswrapper[4675]: I0124 07:25:23.651796 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerStarted","Data":"751118413c1e3ed7487377a3f697c22e66689c703a60f6fb3ee8356ce52f410a"} Jan 24 07:25:23 crc kubenswrapper[4675]: I0124 07:25:23.684235 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" podStartSLOduration=2.6262152309999998 podStartE2EDuration="3.684210898s" podCreationTimestamp="2026-01-24 07:25:20 +0000 UTC" firstStartedPulling="2026-01-24 07:25:21.601702327 +0000 UTC m=+1922.897807550" lastFinishedPulling="2026-01-24 07:25:22.659697994 +0000 UTC m=+1923.955803217" observedRunningTime="2026-01-24 07:25:23.675299561 +0000 UTC m=+1924.971404794" watchObservedRunningTime="2026-01-24 07:25:23.684210898 +0000 UTC m=+1924.980316141" Jan 24 07:25:30 crc kubenswrapper[4675]: I0124 07:25:30.042686 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:25:30 crc kubenswrapper[4675]: I0124 07:25:30.049136 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dxv2k"] Jan 24 07:25:30 crc kubenswrapper[4675]: I0124 
07:25:30.953899 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0aa104-48a4-4eab-afcc-2ef03d860551" path="/var/lib/kubelet/pods/cd0aa104-48a4-4eab-afcc-2ef03d860551/volumes" Jan 24 07:25:40 crc kubenswrapper[4675]: I0124 07:25:40.965020 4675 scope.go:117] "RemoveContainer" containerID="78f8083eacc7c22ec9dea19e2beb1b5b4e3cc8fc1e0078f1f2502d6499fe0c24" Jan 24 07:25:41 crc kubenswrapper[4675]: I0124 07:25:41.035081 4675 scope.go:117] "RemoveContainer" containerID="d284df73b7fd149e40dd2e61a4921f972d1ee1af66e5595a151269eb977744e4" Jan 24 07:25:41 crc kubenswrapper[4675]: I0124 07:25:41.094790 4675 scope.go:117] "RemoveContainer" containerID="28e02e05a169961e6a8905b7cf18ce1c42ea2b78ddd06aee7b4a61c2126390af" Jan 24 07:26:09 crc kubenswrapper[4675]: I0124 07:26:09.074252 4675 generic.go:334] "Generic (PLEG): container finished" podID="27ad7637-701b-43e1-8440-0fd32522fc56" containerID="751118413c1e3ed7487377a3f697c22e66689c703a60f6fb3ee8356ce52f410a" exitCode=0 Jan 24 07:26:09 crc kubenswrapper[4675]: I0124 07:26:09.074342 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerDied","Data":"751118413c1e3ed7487377a3f697c22e66689c703a60f6fb3ee8356ce52f410a"} Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.617245 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.701886 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") pod \"27ad7637-701b-43e1-8440-0fd32522fc56\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.701960 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") pod \"27ad7637-701b-43e1-8440-0fd32522fc56\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.702017 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") pod \"27ad7637-701b-43e1-8440-0fd32522fc56\" (UID: \"27ad7637-701b-43e1-8440-0fd32522fc56\") " Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.716704 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd" (OuterVolumeSpecName: "kube-api-access-7szzd") pod "27ad7637-701b-43e1-8440-0fd32522fc56" (UID: "27ad7637-701b-43e1-8440-0fd32522fc56"). InnerVolumeSpecName "kube-api-access-7szzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.740709 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27ad7637-701b-43e1-8440-0fd32522fc56" (UID: "27ad7637-701b-43e1-8440-0fd32522fc56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.762890 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory" (OuterVolumeSpecName: "inventory") pod "27ad7637-701b-43e1-8440-0fd32522fc56" (UID: "27ad7637-701b-43e1-8440-0fd32522fc56"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.806548 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.806585 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27ad7637-701b-43e1-8440-0fd32522fc56-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:26:10 crc kubenswrapper[4675]: I0124 07:26:10.806599 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7szzd\" (UniqueName: \"kubernetes.io/projected/27ad7637-701b-43e1-8440-0fd32522fc56-kube-api-access-7szzd\") on node \"crc\" DevicePath \"\"" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.095543 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" 
event={"ID":"27ad7637-701b-43e1-8440-0fd32522fc56","Type":"ContainerDied","Data":"ba0545c2af86b7f80e12ad50cc6fe9ae7dbf0381beb86e961ddc73a301493bfd"} Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.095604 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0545c2af86b7f80e12ad50cc6fe9ae7dbf0381beb86e961ddc73a301493bfd" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.095626 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vbvgv" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.232497 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm"] Jan 24 07:26:11 crc kubenswrapper[4675]: E0124 07:26:11.232912 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad7637-701b-43e1-8440-0fd32522fc56" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.232931 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad7637-701b-43e1-8440-0fd32522fc56" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.233081 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ad7637-701b-43e1-8440-0fd32522fc56" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.233682 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.240065 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.240369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.240464 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.242609 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm"] Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.243924 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.317072 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.317182 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc 
kubenswrapper[4675]: I0124 07:26:11.317249 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.419693 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.420186 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.420368 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.425989 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.427036 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.436712 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:11 crc kubenswrapper[4675]: I0124 07:26:11.555533 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:26:12 crc kubenswrapper[4675]: I0124 07:26:12.091923 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm"] Jan 24 07:26:12 crc kubenswrapper[4675]: I0124 07:26:12.105592 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerStarted","Data":"a99c83884277cad4ed2cc7428f3b4ffd3675633ee45914762062beb4b88d95c2"} Jan 24 07:26:13 crc kubenswrapper[4675]: I0124 07:26:13.114411 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerStarted","Data":"282c4b7e726d12b8ea26b87660396ed2ec7d6cd2371b1a805c4ecd4f72af3c0f"} Jan 24 07:26:13 crc kubenswrapper[4675]: I0124 07:26:13.136049 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" podStartSLOduration=1.54995397 podStartE2EDuration="2.136025057s" podCreationTimestamp="2026-01-24 07:26:11 +0000 UTC" firstStartedPulling="2026-01-24 07:26:12.092466541 +0000 UTC m=+1973.388571764" lastFinishedPulling="2026-01-24 07:26:12.678537618 +0000 UTC m=+1973.974642851" observedRunningTime="2026-01-24 07:26:13.128386851 +0000 UTC m=+1974.424492074" watchObservedRunningTime="2026-01-24 07:26:13.136025057 +0000 UTC m=+1974.432130280" Jan 24 07:26:38 crc kubenswrapper[4675]: I0124 07:26:38.629919 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:26:38 crc 
kubenswrapper[4675]: I0124 07:26:38.630480 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:27:08 crc kubenswrapper[4675]: I0124 07:27:08.630018 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:27:08 crc kubenswrapper[4675]: I0124 07:27:08.630466 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:27:15 crc kubenswrapper[4675]: I0124 07:27:15.664633 4675 generic.go:334] "Generic (PLEG): container finished" podID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerID="282c4b7e726d12b8ea26b87660396ed2ec7d6cd2371b1a805c4ecd4f72af3c0f" exitCode=0 Jan 24 07:27:15 crc kubenswrapper[4675]: I0124 07:27:15.665005 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerDied","Data":"282c4b7e726d12b8ea26b87660396ed2ec7d6cd2371b1a805c4ecd4f72af3c0f"} Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.210337 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.320949 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") pod \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.321041 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") pod \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.321139 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") pod \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\" (UID: \"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f\") " Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.332355 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp" (OuterVolumeSpecName: "kube-api-access-tpgxp") pod "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" (UID: "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f"). InnerVolumeSpecName "kube-api-access-tpgxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.349274 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" (UID: "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.350832 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory" (OuterVolumeSpecName: "inventory") pod "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" (UID: "eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.423079 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpgxp\" (UniqueName: \"kubernetes.io/projected/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-kube-api-access-tpgxp\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.423119 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.423130 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.682927 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" 
event={"ID":"eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f","Type":"ContainerDied","Data":"a99c83884277cad4ed2cc7428f3b4ffd3675633ee45914762062beb4b88d95c2"} Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.682981 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a99c83884277cad4ed2cc7428f3b4ffd3675633ee45914762062beb4b88d95c2" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.683334 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.780902 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wq6r9"] Jan 24 07:27:17 crc kubenswrapper[4675]: E0124 07:27:17.781575 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.781662 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.781987 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.782864 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.787741 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.787994 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.788120 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.788224 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.843338 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wq6r9"] Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.932089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.932427 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:17 crc kubenswrapper[4675]: I0124 07:27:17.932552 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.034435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.034521 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.034656 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.038306 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.038559 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.053855 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"ssh-known-hosts-edpm-deployment-wq6r9\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.143085 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.697234 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wq6r9"] Jan 24 07:27:18 crc kubenswrapper[4675]: I0124 07:27:18.705885 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.199787 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.706425 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerStarted","Data":"b538714ae8c39caa82c5c4a1821a82bf3d6640c5f8ea1e748a6ce8c9071fa698"} Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.706841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" 
event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerStarted","Data":"ecaa1c9e319c5290634a023a32d36022af2d97bea126b4183f283e33357cdc18"} Jan 24 07:27:19 crc kubenswrapper[4675]: I0124 07:27:19.727892 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" podStartSLOduration=2.237005032 podStartE2EDuration="2.727868559s" podCreationTimestamp="2026-01-24 07:27:17 +0000 UTC" firstStartedPulling="2026-01-24 07:27:18.705636651 +0000 UTC m=+2040.001741864" lastFinishedPulling="2026-01-24 07:27:19.196500158 +0000 UTC m=+2040.492605391" observedRunningTime="2026-01-24 07:27:19.72212693 +0000 UTC m=+2041.018232153" watchObservedRunningTime="2026-01-24 07:27:19.727868559 +0000 UTC m=+2041.023973782" Jan 24 07:27:27 crc kubenswrapper[4675]: I0124 07:27:27.773497 4675 generic.go:334] "Generic (PLEG): container finished" podID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerID="b538714ae8c39caa82c5c4a1821a82bf3d6640c5f8ea1e748a6ce8c9071fa698" exitCode=0 Jan 24 07:27:27 crc kubenswrapper[4675]: I0124 07:27:27.773574 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerDied","Data":"b538714ae8c39caa82c5c4a1821a82bf3d6640c5f8ea1e748a6ce8c9071fa698"} Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.210785 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.264662 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") pod \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.264827 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") pod \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.264897 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") pod \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\" (UID: \"191f15b0-8a3b-4dc4-bc49-9003c61619bf\") " Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.270864 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d" (OuterVolumeSpecName: "kube-api-access-5tb5d") pod "191f15b0-8a3b-4dc4-bc49-9003c61619bf" (UID: "191f15b0-8a3b-4dc4-bc49-9003c61619bf"). InnerVolumeSpecName "kube-api-access-5tb5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.292490 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "191f15b0-8a3b-4dc4-bc49-9003c61619bf" (UID: "191f15b0-8a3b-4dc4-bc49-9003c61619bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.304304 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "191f15b0-8a3b-4dc4-bc49-9003c61619bf" (UID: "191f15b0-8a3b-4dc4-bc49-9003c61619bf"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.367125 4675 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.367152 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tb5d\" (UniqueName: \"kubernetes.io/projected/191f15b0-8a3b-4dc4-bc49-9003c61619bf-kube-api-access-5tb5d\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.367165 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/191f15b0-8a3b-4dc4-bc49-9003c61619bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.795327 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" 
event={"ID":"191f15b0-8a3b-4dc4-bc49-9003c61619bf","Type":"ContainerDied","Data":"ecaa1c9e319c5290634a023a32d36022af2d97bea126b4183f283e33357cdc18"} Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.795371 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecaa1c9e319c5290634a023a32d36022af2d97bea126b4183f283e33357cdc18" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.795403 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wq6r9" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.880337 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2"] Jan 24 07:27:29 crc kubenswrapper[4675]: E0124 07:27:29.880913 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerName="ssh-known-hosts-edpm-deployment" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.881000 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerName="ssh-known-hosts-edpm-deployment" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.881264 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="191f15b0-8a3b-4dc4-bc49-9003c61619bf" containerName="ssh-known-hosts-edpm-deployment" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.881970 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.883858 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.884022 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.884086 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.889952 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.897106 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2"] Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.977075 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.977376 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:29 crc kubenswrapper[4675]: I0124 07:27:29.977569 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.079028 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.079184 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.079220 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.087485 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: 
\"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.098174 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.099971 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ln8x2\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.202276 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.723217 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2"] Jan 24 07:27:30 crc kubenswrapper[4675]: W0124 07:27:30.726938 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bc4008d_f8c6_4745_b524_d6136632cbfb.slice/crio-62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01 WatchSource:0}: Error finding container 62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01: Status 404 returned error can't find the container with id 62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01 Jan 24 07:27:30 crc kubenswrapper[4675]: I0124 07:27:30.804670 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerStarted","Data":"62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01"} Jan 24 07:27:31 crc kubenswrapper[4675]: I0124 07:27:31.816379 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerStarted","Data":"cd266032e8c5ce16488fe5a3a2a9f6e67a18bbaf6addbbdece241e4ba080673d"} Jan 24 07:27:31 crc kubenswrapper[4675]: I0124 07:27:31.830190 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" podStartSLOduration=2.335300604 podStartE2EDuration="2.83017023s" podCreationTimestamp="2026-01-24 07:27:29 +0000 UTC" firstStartedPulling="2026-01-24 07:27:30.729702524 +0000 UTC m=+2052.025807747" lastFinishedPulling="2026-01-24 07:27:31.22457216 +0000 UTC m=+2052.520677373" observedRunningTime="2026-01-24 
07:27:31.829266419 +0000 UTC m=+2053.125371642" watchObservedRunningTime="2026-01-24 07:27:31.83017023 +0000 UTC m=+2053.126275463" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.629877 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630139 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630176 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630795 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.630837 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2" gracePeriod=600 Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.879054 4675 generic.go:334] 
"Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2" exitCode=0 Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.879132 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2"} Jan 24 07:27:38 crc kubenswrapper[4675]: I0124 07:27:38.879221 4675 scope.go:117] "RemoveContainer" containerID="e5700de8f7c80bc86d979977607791659b62d2561fc2ff64027b69ce1d5f9c38" Jan 24 07:27:39 crc kubenswrapper[4675]: I0124 07:27:39.892788 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"} Jan 24 07:27:40 crc kubenswrapper[4675]: I0124 07:27:40.903474 4675 generic.go:334] "Generic (PLEG): container finished" podID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerID="cd266032e8c5ce16488fe5a3a2a9f6e67a18bbaf6addbbdece241e4ba080673d" exitCode=0 Jan 24 07:27:40 crc kubenswrapper[4675]: I0124 07:27:40.903561 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerDied","Data":"cd266032e8c5ce16488fe5a3a2a9f6e67a18bbaf6addbbdece241e4ba080673d"} Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.350280 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.549532 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") pod \"3bc4008d-f8c6-4745-b524-d6136632cbfb\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.549800 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") pod \"3bc4008d-f8c6-4745-b524-d6136632cbfb\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.549836 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") pod \"3bc4008d-f8c6-4745-b524-d6136632cbfb\" (UID: \"3bc4008d-f8c6-4745-b524-d6136632cbfb\") " Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.556967 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94" (OuterVolumeSpecName: "kube-api-access-vrj94") pod "3bc4008d-f8c6-4745-b524-d6136632cbfb" (UID: "3bc4008d-f8c6-4745-b524-d6136632cbfb"). InnerVolumeSpecName "kube-api-access-vrj94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.579166 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory" (OuterVolumeSpecName: "inventory") pod "3bc4008d-f8c6-4745-b524-d6136632cbfb" (UID: "3bc4008d-f8c6-4745-b524-d6136632cbfb"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.583932 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3bc4008d-f8c6-4745-b524-d6136632cbfb" (UID: "3bc4008d-f8c6-4745-b524-d6136632cbfb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.651703 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.651749 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj94\" (UniqueName: \"kubernetes.io/projected/3bc4008d-f8c6-4745-b524-d6136632cbfb-kube-api-access-vrj94\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.651761 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bc4008d-f8c6-4745-b524-d6136632cbfb-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.919715 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" event={"ID":"3bc4008d-f8c6-4745-b524-d6136632cbfb","Type":"ContainerDied","Data":"62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01"} Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 07:27:42.919783 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c943b067351557a845e030e264d8abf86513b479ca52873f44fe498d880c01" Jan 24 07:27:42 crc kubenswrapper[4675]: I0124 
07:27:42.919815 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ln8x2" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.037603 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw"] Jan 24 07:27:43 crc kubenswrapper[4675]: E0124 07:27:43.038005 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.038025 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.038262 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc4008d-f8c6-4745-b524-d6136632cbfb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.038948 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.047935 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.048023 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.048315 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.048540 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.052040 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw"] Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.060789 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.060914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 
07:27:43.061040 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.162346 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.163473 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.163632 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.167609 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.167782 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.181939 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.354666 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:43 crc kubenswrapper[4675]: W0124 07:27:43.848923 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1b0570_d3a2_4029_bcf8_f41144ea0f06.slice/crio-153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299 WatchSource:0}: Error finding container 153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299: Status 404 returned error can't find the container with id 153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299 Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.850242 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw"] Jan 24 07:27:43 crc kubenswrapper[4675]: I0124 07:27:43.930402 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerStarted","Data":"153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299"} Jan 24 07:27:44 crc kubenswrapper[4675]: I0124 07:27:44.940395 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerStarted","Data":"384eb083a3203008b4bd5eb56dedcd565ab327ceb2439749925163e92ba9d96a"} Jan 24 07:27:44 crc kubenswrapper[4675]: I0124 07:27:44.966893 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" podStartSLOduration=1.4226402839999999 podStartE2EDuration="1.966875556s" podCreationTimestamp="2026-01-24 07:27:43 +0000 UTC" firstStartedPulling="2026-01-24 07:27:43.85156039 +0000 UTC m=+2065.147665623" lastFinishedPulling="2026-01-24 07:27:44.395795672 +0000 UTC m=+2065.691900895" 
observedRunningTime="2026-01-24 07:27:44.961896245 +0000 UTC m=+2066.258001468" watchObservedRunningTime="2026-01-24 07:27:44.966875556 +0000 UTC m=+2066.262980779" Jan 24 07:27:57 crc kubenswrapper[4675]: I0124 07:27:57.047536 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerID="384eb083a3203008b4bd5eb56dedcd565ab327ceb2439749925163e92ba9d96a" exitCode=0 Jan 24 07:27:57 crc kubenswrapper[4675]: I0124 07:27:57.047616 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerDied","Data":"384eb083a3203008b4bd5eb56dedcd565ab327ceb2439749925163e92ba9d96a"} Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.487583 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.664464 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") pod \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.665104 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") pod \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.665212 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") pod 
\"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\" (UID: \"7b1b0570-d3a2-4029-bcf8-f41144ea0f06\") " Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.669424 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9" (OuterVolumeSpecName: "kube-api-access-969h9") pod "7b1b0570-d3a2-4029-bcf8-f41144ea0f06" (UID: "7b1b0570-d3a2-4029-bcf8-f41144ea0f06"). InnerVolumeSpecName "kube-api-access-969h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.694174 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b1b0570-d3a2-4029-bcf8-f41144ea0f06" (UID: "7b1b0570-d3a2-4029-bcf8-f41144ea0f06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.695765 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory" (OuterVolumeSpecName: "inventory") pod "7b1b0570-d3a2-4029-bcf8-f41144ea0f06" (UID: "7b1b0570-d3a2-4029-bcf8-f41144ea0f06"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.766906 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969h9\" (UniqueName: \"kubernetes.io/projected/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-kube-api-access-969h9\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.766932 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:58 crc kubenswrapper[4675]: I0124 07:27:58.766942 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1b0570-d3a2-4029-bcf8-f41144ea0f06-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.066275 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" event={"ID":"7b1b0570-d3a2-4029-bcf8-f41144ea0f06","Type":"ContainerDied","Data":"153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299"} Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.066320 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="153e5260515a5ef74781522ae12a968dec2cda13eee158bf1af0107f0e4a2299" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.066376 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.163995 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh"] Jan 24 07:27:59 crc kubenswrapper[4675]: E0124 07:27:59.164424 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.164788 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.165067 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1b0570-d3a2-4029-bcf8-f41144ea0f06" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.165765 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.168644 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169118 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169306 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169653 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169676 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.169861 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.173534 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174242 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174248 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174301 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174359 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174387 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174423 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174476 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174530 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174610 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc 
kubenswrapper[4675]: I0124 07:27:59.174641 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174684 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174841 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174890 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.174958 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.185195 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh"] Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277180 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277236 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277294 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277321 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277343 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277370 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277555 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277593 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277646 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277671 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277711 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277763 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277789 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.277823 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.281399 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" 
Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.282199 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.282415 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.282682 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.284046 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.285165 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.285651 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.287401 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.287701 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.288442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.289149 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.290746 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.294890 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.300831 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh\" (UID: 
\"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:27:59 crc kubenswrapper[4675]: I0124 07:27:59.486009 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:28:00 crc kubenswrapper[4675]: W0124 07:28:00.051287 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d09456f_a230_420b_b288_c0dc3e8a6e22.slice/crio-e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8 WatchSource:0}: Error finding container e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8: Status 404 returned error can't find the container with id e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8 Jan 24 07:28:00 crc kubenswrapper[4675]: I0124 07:28:00.055876 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh"] Jan 24 07:28:00 crc kubenswrapper[4675]: I0124 07:28:00.074103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerStarted","Data":"e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8"} Jan 24 07:28:02 crc kubenswrapper[4675]: I0124 07:28:02.094169 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerStarted","Data":"abcb54c0d3418aeb602c69c7b34a550299227d39f039236a8c91cb87b227c876"} Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.017015 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" podStartSLOduration=5.105243446 
podStartE2EDuration="6.016991405s" podCreationTimestamp="2026-01-24 07:27:59 +0000 UTC" firstStartedPulling="2026-01-24 07:28:00.054846897 +0000 UTC m=+2081.350952120" lastFinishedPulling="2026-01-24 07:28:00.966594856 +0000 UTC m=+2082.262700079" observedRunningTime="2026-01-24 07:28:02.130136022 +0000 UTC m=+2083.426241255" watchObservedRunningTime="2026-01-24 07:28:05.016991405 +0000 UTC m=+2086.313096628" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.027367 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.029738 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.041581 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.200119 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.200191 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.200230 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302267 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302447 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302706 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.302823 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.335193 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"certified-operators-rh8kw\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.355555 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:05 crc kubenswrapper[4675]: W0124 07:28:05.855508 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b WatchSource:0}: Error finding container cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b: Status 404 returned error can't find the container with id cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b Jan 24 07:28:05 crc kubenswrapper[4675]: I0124 07:28:05.855770 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:06 crc kubenswrapper[4675]: I0124 07:28:06.135204 4675 generic.go:334] "Generic (PLEG): container finished" podID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" exitCode=0 Jan 24 07:28:06 crc kubenswrapper[4675]: I0124 07:28:06.135248 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" 
event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b"} Jan 24 07:28:06 crc kubenswrapper[4675]: I0124 07:28:06.135274 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerStarted","Data":"cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b"} Jan 24 07:28:08 crc kubenswrapper[4675]: I0124 07:28:08.157943 4675 generic.go:334] "Generic (PLEG): container finished" podID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" exitCode=0 Jan 24 07:28:08 crc kubenswrapper[4675]: I0124 07:28:08.158046 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd"} Jan 24 07:28:09 crc kubenswrapper[4675]: I0124 07:28:09.183644 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerStarted","Data":"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285"} Jan 24 07:28:09 crc kubenswrapper[4675]: I0124 07:28:09.212262 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rh8kw" podStartSLOduration=2.7259307919999998 podStartE2EDuration="5.212245338s" podCreationTimestamp="2026-01-24 07:28:04 +0000 UTC" firstStartedPulling="2026-01-24 07:28:06.137950348 +0000 UTC m=+2087.434055571" lastFinishedPulling="2026-01-24 07:28:08.624264894 +0000 UTC m=+2089.920370117" observedRunningTime="2026-01-24 07:28:09.20574429 +0000 UTC m=+2090.501849503" watchObservedRunningTime="2026-01-24 07:28:09.212245338 +0000 UTC 
m=+2090.508350561" Jan 24 07:28:15 crc kubenswrapper[4675]: I0124 07:28:15.355811 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:15 crc kubenswrapper[4675]: I0124 07:28:15.356390 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:15 crc kubenswrapper[4675]: I0124 07:28:15.407250 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:16 crc kubenswrapper[4675]: I0124 07:28:16.288163 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:16 crc kubenswrapper[4675]: I0124 07:28:16.337489 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.256385 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rh8kw" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" containerID="cri-o://971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" gracePeriod=2 Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.673321 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.743979 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") pod \"3281384d-f7c7-4579-a0ef-16e9b131004c\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.745126 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities" (OuterVolumeSpecName: "utilities") pod "3281384d-f7c7-4579-a0ef-16e9b131004c" (UID: "3281384d-f7c7-4579-a0ef-16e9b131004c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.846340 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") pod \"3281384d-f7c7-4579-a0ef-16e9b131004c\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.846390 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") pod \"3281384d-f7c7-4579-a0ef-16e9b131004c\" (UID: \"3281384d-f7c7-4579-a0ef-16e9b131004c\") " Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.846842 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.852952 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j" (OuterVolumeSpecName: "kube-api-access-j7p4j") pod "3281384d-f7c7-4579-a0ef-16e9b131004c" (UID: "3281384d-f7c7-4579-a0ef-16e9b131004c"). InnerVolumeSpecName "kube-api-access-j7p4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.905431 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3281384d-f7c7-4579-a0ef-16e9b131004c" (UID: "3281384d-f7c7-4579-a0ef-16e9b131004c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.950339 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3281384d-f7c7-4579-a0ef-16e9b131004c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:18 crc kubenswrapper[4675]: I0124 07:28:18.950371 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7p4j\" (UniqueName: \"kubernetes.io/projected/3281384d-f7c7-4579-a0ef-16e9b131004c-kube-api-access-j7p4j\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.268848 4675 generic.go:334] "Generic (PLEG): container finished" podID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" exitCode=0 Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.268913 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rh8kw" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.268932 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285"} Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.269324 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rh8kw" event={"ID":"3281384d-f7c7-4579-a0ef-16e9b131004c","Type":"ContainerDied","Data":"cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b"} Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.269345 4675 scope.go:117] "RemoveContainer" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.297966 4675 scope.go:117] "RemoveContainer" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.304289 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.319790 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rh8kw"] Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.345990 4675 scope.go:117] "RemoveContainer" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.375331 4675 scope.go:117] "RemoveContainer" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" Jan 24 07:28:19 crc kubenswrapper[4675]: E0124 07:28:19.376095 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285\": container with ID starting with 971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285 not found: ID does not exist" containerID="971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376124 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285"} err="failed to get container status \"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285\": rpc error: code = NotFound desc = could not find container \"971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285\": container with ID starting with 971c89b05a64ca9d40139ac8a97f9f439ff1bf71d5b5d409323402a3dd305285 not found: ID does not exist" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376143 4675 scope.go:117] "RemoveContainer" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" Jan 24 07:28:19 crc kubenswrapper[4675]: E0124 07:28:19.376531 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd\": container with ID starting with 41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd not found: ID does not exist" containerID="41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376575 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd"} err="failed to get container status \"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd\": rpc error: code = NotFound desc = could not find container \"41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd\": container with ID 
starting with 41ea5983e7558a263e341d5cdeed6b0786171b2f6eb11130b6a459fc93fe77dd not found: ID does not exist" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376595 4675 scope.go:117] "RemoveContainer" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" Jan 24 07:28:19 crc kubenswrapper[4675]: E0124 07:28:19.376944 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b\": container with ID starting with a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b not found: ID does not exist" containerID="a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b" Jan 24 07:28:19 crc kubenswrapper[4675]: I0124 07:28:19.376963 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b"} err="failed to get container status \"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b\": rpc error: code = NotFound desc = could not find container \"a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b\": container with ID starting with a0fa1036a0dd5d53eec32e4c5f6920fbead1893d35a9f8366a08559d6c37681b not found: ID does not exist" Jan 24 07:28:20 crc kubenswrapper[4675]: I0124 07:28:20.951711 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" path="/var/lib/kubelet/pods/3281384d-f7c7-4579-a0ef-16e9b131004c/volumes" Jan 24 07:28:27 crc kubenswrapper[4675]: E0124 07:28:27.099782 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:28:37 crc kubenswrapper[4675]: E0124 07:28:37.362525 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:28:44 crc kubenswrapper[4675]: I0124 07:28:44.484607 4675 generic.go:334] "Generic (PLEG): container finished" podID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerID="abcb54c0d3418aeb602c69c7b34a550299227d39f039236a8c91cb87b227c876" exitCode=0 Jan 24 07:28:44 crc kubenswrapper[4675]: I0124 07:28:44.484780 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerDied","Data":"abcb54c0d3418aeb602c69c7b34a550299227d39f039236a8c91cb87b227c876"} Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.053840 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220232 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220710 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220820 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.220929 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221069 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " 
Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221160 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221252 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221351 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221437 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221573 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc 
kubenswrapper[4675]: I0124 07:28:46.221651 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221753 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221841 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.221940 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") pod \"2d09456f-a230-420b-b288-c0dc3e8a6e22\" (UID: \"2d09456f-a230-420b-b288-c0dc3e8a6e22\") " Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.226370 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.227112 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7" (OuterVolumeSpecName: "kube-api-access-5smg7") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "kube-api-access-5smg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.229217 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.229698 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.229847 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.231055 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.231307 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.234165 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.234528 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.235797 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.240853 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.242889 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.251819 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.259900 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory" (OuterVolumeSpecName: "inventory") pod "2d09456f-a230-420b-b288-c0dc3e8a6e22" (UID: "2d09456f-a230-420b-b288-c0dc3e8a6e22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324092 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324268 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324359 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324425 4675 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324484 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ssh-key-openstack-edpm-ipam\") on 
node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324543 4675 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324600 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324663 4675 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324840 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.324943 4675 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325028 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325091 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325151 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d09456f-a230-420b-b288-c0dc3e8a6e22-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.325214 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5smg7\" (UniqueName: \"kubernetes.io/projected/2d09456f-a230-420b-b288-c0dc3e8a6e22-kube-api-access-5smg7\") on node \"crc\" DevicePath \"\"" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.501559 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" event={"ID":"2d09456f-a230-420b-b288-c0dc3e8a6e22","Type":"ContainerDied","Data":"e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8"} Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.501872 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e17492474e3aa3de10585a9194f9e6a5ae485bbb43d7f063936f087a53e423e8" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.501593 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.710389 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln"] Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.710916 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.710939 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.710966 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-content" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.710975 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-content" Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.710990 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-utilities" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711000 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="extract-utilities" Jan 24 07:28:46 crc kubenswrapper[4675]: E0124 07:28:46.711030 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711040 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711251 
4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281384d-f7c7-4579-a0ef-16e9b131004c" containerName="registry-server" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.711273 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d09456f-a230-420b-b288-c0dc3e8a6e22" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.713352 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.717124 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.717293 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.722159 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.722369 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.722926 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.741369 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln"] Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.835975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: 
\"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836050 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836096 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836138 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.836185 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938006 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938088 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938805 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938856 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.938904 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: 
\"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.939603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.943266 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.955044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.955481 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:46 crc kubenswrapper[4675]: I0124 07:28:46.964961 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7vbln\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:47 crc kubenswrapper[4675]: I0124 07:28:47.036350 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:28:47 crc kubenswrapper[4675]: I0124 07:28:47.653316 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln"] Jan 24 07:28:47 crc kubenswrapper[4675]: E0124 07:28:47.709833 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]" Jan 24 07:28:48 crc kubenswrapper[4675]: I0124 07:28:48.521554 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerStarted","Data":"b05aacdca52badee5e9189ade6e71c65277d2313178c6e5ed17f772dd8c61fe9"} Jan 24 07:28:48 crc kubenswrapper[4675]: I0124 07:28:48.521997 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerStarted","Data":"ddb9f54d74c2e9997a2e10177f7d4e869a6f79eac0c430b061da4c9675fb428b"} Jan 24 07:28:48 crc kubenswrapper[4675]: I0124 07:28:48.535600 4675 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" podStartSLOduration=2.064189265 podStartE2EDuration="2.535582251s" podCreationTimestamp="2026-01-24 07:28:46 +0000 UTC" firstStartedPulling="2026-01-24 07:28:47.654691862 +0000 UTC m=+2128.950797085" lastFinishedPulling="2026-01-24 07:28:48.126084848 +0000 UTC m=+2129.422190071" observedRunningTime="2026-01-24 07:28:48.535181912 +0000 UTC m=+2129.831287135" watchObservedRunningTime="2026-01-24 07:28:48.535582251 +0000 UTC m=+2129.831687474" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.626462 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.628704 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.643191 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.794406 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.794666 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.794861 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.830031 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.831752 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.888239 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"] Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.896737 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.896827 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.896944 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " 
pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.897167 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.897374 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.925434 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"community-operators-v6cgb\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") " pod="openshift-marketplace/community-operators-v6cgb" Jan 24 07:28:49 crc kubenswrapper[4675]: I0124 07:28:49.953638 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:49.998914 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:49.999071 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:49.999108 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.100640 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.100818 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.100847 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.101213 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.101317 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.129767 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod \"redhat-marketplace-h8hbw\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") " pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.147744 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:28:50 crc kubenswrapper[4675]: I0124 07:28:50.666682 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"]
Jan 24 07:28:50 crc kubenswrapper[4675]: W0124 07:28:50.691706 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a329bb_9b28_4ce2_bf06_3eab6c480c22.slice/crio-e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61 WatchSource:0}: Error finding container e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61: Status 404 returned error can't find the container with id e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61
Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.092278 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"]
Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.563041 4675 generic.go:334] "Generic (PLEG): container finished" podID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerID="8a89e0914a7f452c1841d416015d4a4a17414c25699d2ec756c984d8c9a13264" exitCode=0
Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.563126 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"8a89e0914a7f452c1841d416015d4a4a17414c25699d2ec756c984d8c9a13264"}
Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.563163 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerStarted","Data":"d36464ed060d00cae74b46458c91b3d1d178d0201cf341fe273c0e02b6c5bbe1"}
Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.564831 4675 generic.go:334] "Generic (PLEG): container finished" podID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027" exitCode=0
Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.564850 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027"}
Jan 24 07:28:51 crc kubenswrapper[4675]: I0124 07:28:51.564866 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerStarted","Data":"e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61"}
Jan 24 07:28:52 crc kubenswrapper[4675]: I0124 07:28:52.577030 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerStarted","Data":"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"}
Jan 24 07:28:53 crc kubenswrapper[4675]: I0124 07:28:53.589344 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerStarted","Data":"1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285"}
Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.623903 4675 generic.go:334] "Generic (PLEG): container finished" podID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec" exitCode=0
Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.623963 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"}
Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.628147 4675 generic.go:334] "Generic (PLEG): container finished" podID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerID="1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285" exitCode=0
Jan 24 07:28:56 crc kubenswrapper[4675]: I0124 07:28:56.628199 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285"}
Jan 24 07:28:57 crc kubenswrapper[4675]: E0124 07:28:57.997106 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache]"
Jan 24 07:28:58 crc kubenswrapper[4675]: I0124 07:28:58.658660 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerStarted","Data":"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"}
Jan 24 07:28:58 crc kubenswrapper[4675]: I0124 07:28:58.668514 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerStarted","Data":"4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e"}
Jan 24 07:28:58 crc kubenswrapper[4675]: I0124 07:28:58.700577 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v6cgb" podStartSLOduration=3.968572093 podStartE2EDuration="9.700529345s" podCreationTimestamp="2026-01-24 07:28:49 +0000 UTC" firstStartedPulling="2026-01-24 07:28:51.566569721 +0000 UTC m=+2132.862674944" lastFinishedPulling="2026-01-24 07:28:57.298526973 +0000 UTC m=+2138.594632196" observedRunningTime="2026-01-24 07:28:58.68959384 +0000 UTC m=+2139.985699063" watchObservedRunningTime="2026-01-24 07:28:58.700529345 +0000 UTC m=+2139.996634598"
Jan 24 07:28:59 crc kubenswrapper[4675]: I0124 07:28:59.953935 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v6cgb"
Jan 24 07:28:59 crc kubenswrapper[4675]: I0124 07:28:59.953987 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v6cgb"
Jan 24 07:29:00 crc kubenswrapper[4675]: I0124 07:29:00.149300 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:29:00 crc kubenswrapper[4675]: I0124 07:29:00.149680 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:29:01 crc kubenswrapper[4675]: I0124 07:29:01.115266 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-v6cgb" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:29:01 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Jan 24 07:29:01 crc kubenswrapper[4675]: >
Jan 24 07:29:01 crc kubenswrapper[4675]: I0124 07:29:01.192667 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-h8hbw" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:29:01 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Jan 24 07:29:01 crc kubenswrapper[4675]: >
Jan 24 07:29:08 crc kubenswrapper[4675]: E0124 07:29:08.210983 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache]"
Jan 24 07:29:09 crc kubenswrapper[4675]: I0124 07:29:09.998365 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v6cgb"
Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.025221 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h8hbw" podStartSLOduration=15.389534198 podStartE2EDuration="21.025191875s" podCreationTimestamp="2026-01-24 07:28:49 +0000 UTC" firstStartedPulling="2026-01-24 07:28:51.565204907 +0000 UTC m=+2132.861310130" lastFinishedPulling="2026-01-24 07:28:57.200862594 +0000 UTC m=+2138.496967807" observedRunningTime="2026-01-24 07:28:58.724668031 +0000 UTC m=+2140.020773254" watchObservedRunningTime="2026-01-24 07:29:10.025191875 +0000 UTC m=+2151.321297138"
Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.055484 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v6cgb"
Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.194272 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.236834 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"]
Jan 24 07:29:10 crc kubenswrapper[4675]: I0124 07:29:10.251245 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:29:11 crc kubenswrapper[4675]: I0124 07:29:11.265548 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v6cgb" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server" containerID="cri-o://7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d" gracePeriod=2
Jan 24 07:29:11 crc kubenswrapper[4675]: I0124 07:29:11.898692 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.086101 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") pod \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") "
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.086315 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") pod \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") "
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.086441 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") pod \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\" (UID: \"35a329bb-9b28-4ce2-bf06-3eab6c480c22\") "
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.086985 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities" (OuterVolumeSpecName: "utilities") pod "35a329bb-9b28-4ce2-bf06-3eab6c480c22" (UID: "35a329bb-9b28-4ce2-bf06-3eab6c480c22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.087521 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.092088 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7" (OuterVolumeSpecName: "kube-api-access-hwvz7") pod "35a329bb-9b28-4ce2-bf06-3eab6c480c22" (UID: "35a329bb-9b28-4ce2-bf06-3eab6c480c22"). InnerVolumeSpecName "kube-api-access-hwvz7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.144665 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35a329bb-9b28-4ce2-bf06-3eab6c480c22" (UID: "35a329bb-9b28-4ce2-bf06-3eab6c480c22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.189265 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a329bb-9b28-4ce2-bf06-3eab6c480c22-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.189299 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwvz7\" (UniqueName: \"kubernetes.io/projected/35a329bb-9b28-4ce2-bf06-3eab6c480c22-kube-api-access-hwvz7\") on node \"crc\" DevicePath \"\""
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275482 4675 generic.go:334] "Generic (PLEG): container finished" podID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d" exitCode=0
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275540 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6cgb"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275545 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"}
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.275987 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6cgb" event={"ID":"35a329bb-9b28-4ce2-bf06-3eab6c480c22","Type":"ContainerDied","Data":"e33aa3a3a2da850636ed533d417ddd47ba91543378ce7f702e237bbac528dc61"}
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.276011 4675 scope.go:117] "RemoveContainer" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.339881 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"]
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.345010 4675 scope.go:117] "RemoveContainer" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.353881 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v6cgb"]
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.455120 4675 scope.go:117] "RemoveContainer" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.460182 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"]
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.467799 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h8hbw" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server" containerID="cri-o://4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e" gracePeriod=2
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.582417 4675 scope.go:117] "RemoveContainer" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"
Jan 24 07:29:12 crc kubenswrapper[4675]: E0124 07:29:12.586426 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d\": container with ID starting with 7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d not found: ID does not exist" containerID="7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.586465 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d"} err="failed to get container status \"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d\": rpc error: code = NotFound desc = could not find container \"7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d\": container with ID starting with 7fdb02d4f178013c75d4f482907dce624092f86f5c0cf3ad6fb7e06bc685da9d not found: ID does not exist"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.586491 4675 scope.go:117] "RemoveContainer" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"
Jan 24 07:29:12 crc kubenswrapper[4675]: E0124 07:29:12.587308 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec\": container with ID starting with 2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec not found: ID does not exist" containerID="2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.587331 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec"} err="failed to get container status \"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec\": rpc error: code = NotFound desc = could not find container \"2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec\": container with ID starting with 2fe150c0b7f4dcf2bbbb90fc27143aab2a01022c224914825b956930a686aaec not found: ID does not exist"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.587344 4675 scope.go:117] "RemoveContainer" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027"
Jan 24 07:29:12 crc kubenswrapper[4675]: E0124 07:29:12.587565 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027\": container with ID starting with 21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027 not found: ID does not exist" containerID="21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.587583 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027"} err="failed to get container status \"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027\": rpc error: code = NotFound desc = could not find container \"21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027\": container with ID starting with 21df6239d35da5e6c673bf973ef20a1e0cb90fcbc4c35aaef176115b763d8027 not found: ID does not exist"
Jan 24 07:29:12 crc kubenswrapper[4675]: I0124 07:29:12.953283 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" path="/var/lib/kubelet/pods/35a329bb-9b28-4ce2-bf06-3eab6c480c22/volumes"
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.290978 4675 generic.go:334] "Generic (PLEG): container finished" podID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerID="4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e" exitCode=0
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.291040 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e"}
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.565737 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.719455 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") pod \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") "
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.719552 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") pod \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") "
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.719630 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") pod \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\" (UID: \"a04cc3fe-10f9-4d63-b55d-3717957a05cb\") "
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.720423 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities" (OuterVolumeSpecName: "utilities") pod "a04cc3fe-10f9-4d63-b55d-3717957a05cb" (UID: "a04cc3fe-10f9-4d63-b55d-3717957a05cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.728262 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46" (OuterVolumeSpecName: "kube-api-access-4cn46") pod "a04cc3fe-10f9-4d63-b55d-3717957a05cb" (UID: "a04cc3fe-10f9-4d63-b55d-3717957a05cb"). InnerVolumeSpecName "kube-api-access-4cn46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.739918 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a04cc3fe-10f9-4d63-b55d-3717957a05cb" (UID: "a04cc3fe-10f9-4d63-b55d-3717957a05cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.822048 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cn46\" (UniqueName: \"kubernetes.io/projected/a04cc3fe-10f9-4d63-b55d-3717957a05cb-kube-api-access-4cn46\") on node \"crc\" DevicePath \"\""
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.822083 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:29:13 crc kubenswrapper[4675]: I0124 07:29:13.822092 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04cc3fe-10f9-4d63-b55d-3717957a05cb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.303264 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8hbw" event={"ID":"a04cc3fe-10f9-4d63-b55d-3717957a05cb","Type":"ContainerDied","Data":"d36464ed060d00cae74b46458c91b3d1d178d0201cf341fe273c0e02b6c5bbe1"}
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.303633 4675 scope.go:117] "RemoveContainer" containerID="4e7f3af280a659bcaa6cd2fac0059a4618004e98d083cf59fd090c570fe7e36e"
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.303363 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8hbw"
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.326494 4675 scope.go:117] "RemoveContainer" containerID="1de9f85016465f0bd2647bc6ec7fc555fb0db597391bbad5d3605a6da326b285"
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.345452 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"]
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.355753 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8hbw"]
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.356447 4675 scope.go:117] "RemoveContainer" containerID="8a89e0914a7f452c1841d416015d4a4a17414c25699d2ec756c984d8c9a13264"
Jan 24 07:29:14 crc kubenswrapper[4675]: I0124 07:29:14.955349 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" path="/var/lib/kubelet/pods/a04cc3fe-10f9-4d63-b55d-3717957a05cb/volumes"
Jan 24 07:29:18 crc kubenswrapper[4675]: E0124 07:29:18.432463 4675 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3281384d_f7c7_4579_a0ef_16e9b131004c.slice/crio-cae04bcce90a2151345da7cb5227e8258811cd4a89d637fa2c68304d59a2c07b\": RecentStats: unable to find data in memory cache]"
Jan 24 07:29:38 crc kubenswrapper[4675]: I0124 07:29:38.630438 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:29:38 crc kubenswrapper[4675]: I0124 07:29:38.630931 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.150387 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"]
Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151160 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-content"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151172 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-content"
Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151195 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151232 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server"
Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151247 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151253 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server"
Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151278 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-content"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151284 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-content"
Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151294 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-utilities"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151299 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="extract-utilities"
Jan 24 07:30:00 crc kubenswrapper[4675]: E0124 07:30:00.151313 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-utilities"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151318 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="extract-utilities"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151470 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a329bb-9b28-4ce2-bf06-3eab6c480c22" containerName="registry-server"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.151486 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04cc3fe-10f9-4d63-b55d-3717957a05cb" containerName="registry-server"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.152131 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.158862 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.159434 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.163155 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"]
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.307514 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.307831 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.307994 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.409979 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.410141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.410272 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.411999 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.415835 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.444019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"collect-profiles-29487330-jftpz\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:00 crc kubenswrapper[4675]: I0124 07:30:00.480662 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
Jan 24 07:30:01 crc kubenswrapper[4675]: I0124 07:30:01.437490 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"]
Jan 24 07:30:02 crc kubenswrapper[4675]: I0124 07:30:02.081545 4675 generic.go:334] "Generic (PLEG): container finished" podID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerID="22d304cf2a431fcb8000201803e0e5ad01887c331686ce49b53237ea0966b67d" exitCode=0
Jan 24 07:30:02 crc kubenswrapper[4675]: I0124 07:30:02.081631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" event={"ID":"0e2fd991-68ff-45d8-bc15-e245d8de85b6","Type":"ContainerDied","Data":"22d304cf2a431fcb8000201803e0e5ad01887c331686ce49b53237ea0966b67d"}
Jan 24 07:30:02 crc kubenswrapper[4675]: I0124 07:30:02.081863 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz"
event={"ID":"0e2fd991-68ff-45d8-bc15-e245d8de85b6","Type":"ContainerStarted","Data":"5007302a9d841f599454539ac30af2b2e22d273022c50491f661165afa9ea924"} Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.419323 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.488529 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") pod \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.488954 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") pod \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.489281 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") pod \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\" (UID: \"0e2fd991-68ff-45d8-bc15-e245d8de85b6\") " Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.490297 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e2fd991-68ff-45d8-bc15-e245d8de85b6" (UID: "0e2fd991-68ff-45d8-bc15-e245d8de85b6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.494219 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e2fd991-68ff-45d8-bc15-e245d8de85b6" (UID: "0e2fd991-68ff-45d8-bc15-e245d8de85b6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.500073 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk" (OuterVolumeSpecName: "kube-api-access-rlztk") pod "0e2fd991-68ff-45d8-bc15-e245d8de85b6" (UID: "0e2fd991-68ff-45d8-bc15-e245d8de85b6"). InnerVolumeSpecName "kube-api-access-rlztk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.591617 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2fd991-68ff-45d8-bc15-e245d8de85b6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.591647 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlztk\" (UniqueName: \"kubernetes.io/projected/0e2fd991-68ff-45d8-bc15-e245d8de85b6-kube-api-access-rlztk\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:03 crc kubenswrapper[4675]: I0124 07:30:03.591659 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e2fd991-68ff-45d8-bc15-e245d8de85b6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.098360 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" 
event={"ID":"0e2fd991-68ff-45d8-bc15-e245d8de85b6","Type":"ContainerDied","Data":"5007302a9d841f599454539ac30af2b2e22d273022c50491f661165afa9ea924"} Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.098401 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5007302a9d841f599454539ac30af2b2e22d273022c50491f661165afa9ea924" Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.098437 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487330-jftpz" Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.499036 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.506154 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487285-lfs59"] Jan 24 07:30:04 crc kubenswrapper[4675]: I0124 07:30:04.953534 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4201e4-a1e0-4256-aa5a-67383ee87bee" path="/var/lib/kubelet/pods/0b4201e4-a1e0-4256-aa5a-67383ee87bee/volumes" Jan 24 07:30:08 crc kubenswrapper[4675]: I0124 07:30:08.630047 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:30:08 crc kubenswrapper[4675]: I0124 07:30:08.630126 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:30:09 crc 
kubenswrapper[4675]: I0124 07:30:09.142598 4675 generic.go:334] "Generic (PLEG): container finished" podID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerID="b05aacdca52badee5e9189ade6e71c65277d2313178c6e5ed17f772dd8c61fe9" exitCode=0 Jan 24 07:30:09 crc kubenswrapper[4675]: I0124 07:30:09.142648 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerDied","Data":"b05aacdca52badee5e9189ade6e71c65277d2313178c6e5ed17f772dd8c61fe9"} Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.614695 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734574 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734881 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734918 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.734991 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.735096 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") pod \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\" (UID: \"3e407880-d27a-4aa2-bb81-a87bb20ffcf1\") " Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.743063 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb" (OuterVolumeSpecName: "kube-api-access-2chfb") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "kube-api-access-2chfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.763588 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.772422 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.775861 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.776091 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory" (OuterVolumeSpecName: "inventory") pod "3e407880-d27a-4aa2-bb81-a87bb20ffcf1" (UID: "3e407880-d27a-4aa2-bb81-a87bb20ffcf1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837685 4675 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837741 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837753 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2chfb\" (UniqueName: \"kubernetes.io/projected/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-kube-api-access-2chfb\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837762 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:10 crc kubenswrapper[4675]: I0124 07:30:10.837771 4675 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e407880-d27a-4aa2-bb81-a87bb20ffcf1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.162892 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" event={"ID":"3e407880-d27a-4aa2-bb81-a87bb20ffcf1","Type":"ContainerDied","Data":"ddb9f54d74c2e9997a2e10177f7d4e869a6f79eac0c430b061da4c9675fb428b"} Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.163433 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb9f54d74c2e9997a2e10177f7d4e869a6f79eac0c430b061da4c9675fb428b" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.162947 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7vbln" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.329252 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g"] Jan 24 07:30:11 crc kubenswrapper[4675]: E0124 07:30:11.329823 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerName="collect-profiles" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.329848 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerName="collect-profiles" Jan 24 07:30:11 crc kubenswrapper[4675]: E0124 07:30:11.329875 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.329885 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.330112 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e407880-d27a-4aa2-bb81-a87bb20ffcf1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.330152 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2fd991-68ff-45d8-bc15-e245d8de85b6" containerName="collect-profiles" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.330918 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.339957 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.340052 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.339971 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.340227 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.343310 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.343586 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.353255 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g"] Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452680 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452785 4675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452832 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452867 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452913 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.452931 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554483 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554589 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554644 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554744 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554773 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.554843 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.564340 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.565204 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.565472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.583403 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.586983 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.591031 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:11 crc kubenswrapper[4675]: I0124 07:30:11.658975 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:30:12 crc kubenswrapper[4675]: I0124 07:30:12.205588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g"] Jan 24 07:30:13 crc kubenswrapper[4675]: I0124 07:30:13.182946 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerStarted","Data":"b5e680060e34c52cdc6b24399e8e4a401f9386d221e4f141b5df1377a0d9a3c0"} Jan 24 07:30:13 crc kubenswrapper[4675]: I0124 07:30:13.183369 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerStarted","Data":"99dfba729f8f540b1cab59f4ba1e3806fcedb3caf6cfa61b89f8e3387a1f3f2b"} Jan 24 07:30:13 crc kubenswrapper[4675]: I0124 07:30:13.206157 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" podStartSLOduration=1.7352952529999999 podStartE2EDuration="2.206138603s" podCreationTimestamp="2026-01-24 07:30:11 +0000 UTC" firstStartedPulling="2026-01-24 07:30:12.229788861 +0000 UTC m=+2213.525894084" lastFinishedPulling="2026-01-24 07:30:12.700632211 +0000 UTC m=+2213.996737434" observedRunningTime="2026-01-24 07:30:13.197648547 +0000 UTC m=+2214.493753780" watchObservedRunningTime="2026-01-24 07:30:13.206138603 +0000 UTC m=+2214.502243826" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 
07:30:38.630048 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.630598 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.630646 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.631528 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:30:38 crc kubenswrapper[4675]: I0124 07:30:38.631586 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" gracePeriod=600 Jan 24 07:30:38 crc kubenswrapper[4675]: E0124 07:30:38.753054 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.435840 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" exitCode=0 Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.435884 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"} Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.435915 4675 scope.go:117] "RemoveContainer" containerID="a1c9273bc1d397c7b2b3e725108610e2ba92ba82858b4b6dca89da8ddff34bf2" Jan 24 07:30:39 crc kubenswrapper[4675]: I0124 07:30:39.436635 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:30:39 crc kubenswrapper[4675]: E0124 07:30:39.436938 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:30:41 crc kubenswrapper[4675]: I0124 07:30:41.278076 4675 scope.go:117] "RemoveContainer" containerID="a7c88f78a0b2d3479a858654ffc24e4044f89c1ce4d62775bbcc5f9d5bd1b775" Jan 24 07:30:54 crc kubenswrapper[4675]: I0124 07:30:54.942380 4675 scope.go:117] "RemoveContainer" 
containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:30:54 crc kubenswrapper[4675]: E0124 07:30:54.943115 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:07 crc kubenswrapper[4675]: I0124 07:31:07.943097 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:07 crc kubenswrapper[4675]: E0124 07:31:07.943848 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:16 crc kubenswrapper[4675]: I0124 07:31:16.199044 4675 generic.go:334] "Generic (PLEG): container finished" podID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerID="b5e680060e34c52cdc6b24399e8e4a401f9386d221e4f141b5df1377a0d9a3c0" exitCode=0 Jan 24 07:31:16 crc kubenswrapper[4675]: I0124 07:31:16.199551 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerDied","Data":"b5e680060e34c52cdc6b24399e8e4a401f9386d221e4f141b5df1377a0d9a3c0"} Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.637293 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726041 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726160 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726215 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726256 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: 
\"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.726342 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") pod \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\" (UID: \"388e10c7-15e4-40d5-94ed-5c6612f7fbfe\") " Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.734236 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.734302 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm" (OuterVolumeSpecName: "kube-api-access-c7qcm") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "kube-api-access-c7qcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.756607 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.758000 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory" (OuterVolumeSpecName: "inventory") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.758083 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.759054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "388e10c7-15e4-40d5-94ed-5c6612f7fbfe" (UID: "388e10c7-15e4-40d5-94ed-5c6612f7fbfe"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829415 4675 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829557 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829573 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7qcm\" (UniqueName: \"kubernetes.io/projected/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-kube-api-access-c7qcm\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829584 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829593 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:17 crc kubenswrapper[4675]: I0124 07:31:17.829601 4675 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/388e10c7-15e4-40d5-94ed-5c6612f7fbfe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.219274 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" event={"ID":"388e10c7-15e4-40d5-94ed-5c6612f7fbfe","Type":"ContainerDied","Data":"99dfba729f8f540b1cab59f4ba1e3806fcedb3caf6cfa61b89f8e3387a1f3f2b"} Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.219327 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99dfba729f8f540b1cab59f4ba1e3806fcedb3caf6cfa61b89f8e3387a1f3f2b" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.219328 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.370345 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"] Jan 24 07:31:18 crc kubenswrapper[4675]: E0124 07:31:18.371314 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.371339 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.371597 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="388e10c7-15e4-40d5-94ed-5c6612f7fbfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.372423 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.374823 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.377548 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.380322 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.380405 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.380600 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.394324 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"] Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.441782 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.441876 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.441911 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.442232 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.442448 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.550174 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.550479 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.550624 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.551064 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.551199 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.556588 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: 
\"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.556896 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.560371 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.566442 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.569263 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:18 crc kubenswrapper[4675]: I0124 07:31:18.714044 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" Jan 24 07:31:19 crc kubenswrapper[4675]: I0124 07:31:19.255954 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"] Jan 24 07:31:20 crc kubenswrapper[4675]: I0124 07:31:20.009134 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 24 07:31:20 crc kubenswrapper[4675]: I0124 07:31:20.239407 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerStarted","Data":"8264419fbcac8d097619ac8c8a3c44cfde990740dcabe45435c29debb765207d"} Jan 24 07:31:21 crc kubenswrapper[4675]: I0124 07:31:21.248810 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerStarted","Data":"3cc600e74f559f08c6e306f068e4792c472d7b6a953a06f392eff3d56a90133e"} Jan 24 07:31:21 crc kubenswrapper[4675]: I0124 07:31:21.277460 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" podStartSLOduration=2.5452895509999998 podStartE2EDuration="3.277444476s" podCreationTimestamp="2026-01-24 07:31:18 +0000 UTC" firstStartedPulling="2026-01-24 07:31:19.27448369 +0000 UTC m=+2280.570588913" lastFinishedPulling="2026-01-24 07:31:20.006638595 +0000 UTC m=+2281.302743838" observedRunningTime="2026-01-24 07:31:21.272198468 +0000 UTC m=+2282.568303691" watchObservedRunningTime="2026-01-24 07:31:21.277444476 +0000 UTC m=+2282.573549699" Jan 24 07:31:21 crc kubenswrapper[4675]: I0124 07:31:21.943102 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:21 crc kubenswrapper[4675]: E0124 
07:31:21.943816 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:34 crc kubenswrapper[4675]: I0124 07:31:34.942619 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:34 crc kubenswrapper[4675]: E0124 07:31:34.943571 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:46 crc kubenswrapper[4675]: I0124 07:31:46.943270 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:46 crc kubenswrapper[4675]: E0124 07:31:46.944099 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:31:59 crc kubenswrapper[4675]: I0124 07:31:59.942185 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:31:59 crc 
kubenswrapper[4675]: E0124 07:31:59.943069 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:11 crc kubenswrapper[4675]: I0124 07:32:11.943081 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:32:11 crc kubenswrapper[4675]: E0124 07:32:11.944159 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:26 crc kubenswrapper[4675]: I0124 07:32:26.942902 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:32:26 crc kubenswrapper[4675]: E0124 07:32:26.943763 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:38 crc kubenswrapper[4675]: I0124 07:32:38.949862 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 
24 07:32:38 crc kubenswrapper[4675]: E0124 07:32:38.950537 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:32:53 crc kubenswrapper[4675]: I0124 07:32:53.942178 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:32:53 crc kubenswrapper[4675]: E0124 07:32:53.942853 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:33:07 crc kubenswrapper[4675]: I0124 07:33:07.943438 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:33:07 crc kubenswrapper[4675]: E0124 07:33:07.944766 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:33:22 crc kubenswrapper[4675]: I0124 07:33:22.943594 4675 scope.go:117] "RemoveContainer" 
containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:33:22 crc kubenswrapper[4675]: E0124 07:33:22.944939 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.722866 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"] Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.725280 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.737254 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"] Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.780132 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.780199 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.780356 4675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.882118 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.882515 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.882564 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.883103 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk" Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.883172 4675 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:31 crc kubenswrapper[4675]: I0124 07:33:31.901103 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"redhat-operators-txpwk\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") " pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:32 crc kubenswrapper[4675]: I0124 07:33:32.100295 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:32 crc kubenswrapper[4675]: I0124 07:33:32.581947 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.565693 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478" exitCode=0
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.566362 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"}
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.566419 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerStarted","Data":"437f6576151dec232bbda2f3fa25c23c1adeb0d977d51222f8641a5e83331e27"}
Jan 24 07:33:33 crc kubenswrapper[4675]: I0124 07:33:33.569220 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 24 07:33:34 crc kubenswrapper[4675]: I0124 07:33:34.576248 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerStarted","Data":"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"}
Jan 24 07:33:34 crc kubenswrapper[4675]: I0124 07:33:34.944036 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:33:34 crc kubenswrapper[4675]: E0124 07:33:34.944572 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:33:38 crc kubenswrapper[4675]: I0124 07:33:38.614236 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1" exitCode=0
Jan 24 07:33:38 crc kubenswrapper[4675]: I0124 07:33:38.614622 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"}
Jan 24 07:33:39 crc kubenswrapper[4675]: I0124 07:33:39.626803 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerStarted","Data":"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"}
Jan 24 07:33:39 crc kubenswrapper[4675]: I0124 07:33:39.651705 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txpwk" podStartSLOduration=3.22403018 podStartE2EDuration="8.651686799s" podCreationTimestamp="2026-01-24 07:33:31 +0000 UTC" firstStartedPulling="2026-01-24 07:33:33.569003323 +0000 UTC m=+2414.865108546" lastFinishedPulling="2026-01-24 07:33:38.996659932 +0000 UTC m=+2420.292765165" observedRunningTime="2026-01-24 07:33:39.643332296 +0000 UTC m=+2420.939437529" watchObservedRunningTime="2026-01-24 07:33:39.651686799 +0000 UTC m=+2420.947792022"
Jan 24 07:33:42 crc kubenswrapper[4675]: I0124 07:33:42.100842 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:42 crc kubenswrapper[4675]: I0124 07:33:42.101765 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:43 crc kubenswrapper[4675]: I0124 07:33:43.153381 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-txpwk" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:33:43 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Jan 24 07:33:43 crc kubenswrapper[4675]: >
Jan 24 07:33:46 crc kubenswrapper[4675]: I0124 07:33:46.943564 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:33:46 crc kubenswrapper[4675]: E0124 07:33:46.944066 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:33:52 crc kubenswrapper[4675]: I0124 07:33:52.157237 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:52 crc kubenswrapper[4675]: I0124 07:33:52.221439 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:52 crc kubenswrapper[4675]: I0124 07:33:52.393264 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:53 crc kubenswrapper[4675]: I0124 07:33:53.770029 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-txpwk" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server" containerID="cri-o://adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5" gracePeriod=2
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.205428 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.226861 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") pod \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") "
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.226995 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") pod \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") "
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.227059 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") pod \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\" (UID: \"5ab038a6-becf-4e29-9a38-9a92e2e7df69\") "
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.227774 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities" (OuterVolumeSpecName: "utilities") pod "5ab038a6-becf-4e29-9a38-9a92e2e7df69" (UID: "5ab038a6-becf-4e29-9a38-9a92e2e7df69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.237417 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc" (OuterVolumeSpecName: "kube-api-access-xnwkc") pod "5ab038a6-becf-4e29-9a38-9a92e2e7df69" (UID: "5ab038a6-becf-4e29-9a38-9a92e2e7df69"). InnerVolumeSpecName "kube-api-access-xnwkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.329916 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.330251 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwkc\" (UniqueName: \"kubernetes.io/projected/5ab038a6-becf-4e29-9a38-9a92e2e7df69-kube-api-access-xnwkc\") on node \"crc\" DevicePath \"\""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.343216 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ab038a6-becf-4e29-9a38-9a92e2e7df69" (UID: "5ab038a6-becf-4e29-9a38-9a92e2e7df69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.432422 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab038a6-becf-4e29-9a38-9a92e2e7df69-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.778983 4675 generic.go:334] "Generic (PLEG): container finished" podID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5" exitCode=0
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779020 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"}
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779053 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txpwk" event={"ID":"5ab038a6-becf-4e29-9a38-9a92e2e7df69","Type":"ContainerDied","Data":"437f6576151dec232bbda2f3fa25c23c1adeb0d977d51222f8641a5e83331e27"}
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779062 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txpwk"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.779072 4675 scope.go:117] "RemoveContainer" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.800685 4675 scope.go:117] "RemoveContainer" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.829102 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.832727 4675 scope.go:117] "RemoveContainer" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.842100 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-txpwk"]
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.883589 4675 scope.go:117] "RemoveContainer" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"
Jan 24 07:33:54 crc kubenswrapper[4675]: E0124 07:33:54.884272 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5\": container with ID starting with adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5 not found: ID does not exist" containerID="adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.884324 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5"} err="failed to get container status \"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5\": rpc error: code = NotFound desc = could not find container \"adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5\": container with ID starting with adbe34267bc5e08612a5e6629786b9a2a3edb03ca9b73323427f94704a4965e5 not found: ID does not exist"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.884380 4675 scope.go:117] "RemoveContainer" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"
Jan 24 07:33:54 crc kubenswrapper[4675]: E0124 07:33:54.884942 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1\": container with ID starting with c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1 not found: ID does not exist" containerID="c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.885012 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1"} err="failed to get container status \"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1\": rpc error: code = NotFound desc = could not find container \"c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1\": container with ID starting with c2f438320488069b86c9662230f67d842e3c3f9364aa62118aaddbced50ea2c1 not found: ID does not exist"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.885049 4675 scope.go:117] "RemoveContainer" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"
Jan 24 07:33:54 crc kubenswrapper[4675]: E0124 07:33:54.885488 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478\": container with ID starting with 2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478 not found: ID does not exist" containerID="2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.885538 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478"} err="failed to get container status \"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478\": rpc error: code = NotFound desc = could not find container \"2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478\": container with ID starting with 2f3b4d7d08955b8e838cc748258a627355aa99429fa83bf13779dad5def1c478 not found: ID does not exist"
Jan 24 07:33:54 crc kubenswrapper[4675]: I0124 07:33:54.959698 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" path="/var/lib/kubelet/pods/5ab038a6-becf-4e29-9a38-9a92e2e7df69/volumes"
Jan 24 07:33:58 crc kubenswrapper[4675]: I0124 07:33:58.948360 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:33:58 crc kubenswrapper[4675]: E0124 07:33:58.949295 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:10 crc kubenswrapper[4675]: I0124 07:34:10.943364 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:10 crc kubenswrapper[4675]: E0124 07:34:10.944179 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:22 crc kubenswrapper[4675]: I0124 07:34:22.944535 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:22 crc kubenswrapper[4675]: E0124 07:34:22.945326 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:35 crc kubenswrapper[4675]: I0124 07:34:35.942609 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:35 crc kubenswrapper[4675]: E0124 07:34:35.943248 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:34:50 crc kubenswrapper[4675]: I0124 07:34:50.945668 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:34:50 crc kubenswrapper[4675]: E0124 07:34:50.946500 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:04 crc kubenswrapper[4675]: I0124 07:35:04.948701 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:04 crc kubenswrapper[4675]: E0124 07:35:04.950473 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:18 crc kubenswrapper[4675]: I0124 07:35:18.949383 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:18 crc kubenswrapper[4675]: E0124 07:35:18.951159 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:32 crc kubenswrapper[4675]: I0124 07:35:32.942875 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:32 crc kubenswrapper[4675]: E0124 07:35:32.943889 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:35:46 crc kubenswrapper[4675]: I0124 07:35:46.942817 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71"
Jan 24 07:35:48 crc kubenswrapper[4675]: I0124 07:35:48.012472 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4"}
Jan 24 07:37:44 crc kubenswrapper[4675]: I0124 07:37:44.070969 4675 generic.go:334] "Generic (PLEG): container finished" podID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerID="3cc600e74f559f08c6e306f068e4792c472d7b6a953a06f392eff3d56a90133e" exitCode=0
Jan 24 07:37:44 crc kubenswrapper[4675]: I0124 07:37:44.071051 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerDied","Data":"3cc600e74f559f08c6e306f068e4792c472d7b6a953a06f392eff3d56a90133e"}
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.573554 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598458 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598602 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598630 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598673 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.598797 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") pod \"d457c71e-ef41-4bf9-a59b-b3221df26b41\" (UID: \"d457c71e-ef41-4bf9-a59b-b3221df26b41\") "
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.604602 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s" (OuterVolumeSpecName: "kube-api-access-t7l7s") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "kube-api-access-t7l7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.620740 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.639495 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory" (OuterVolumeSpecName: "inventory") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.647061 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.649507 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d457c71e-ef41-4bf9-a59b-b3221df26b41" (UID: "d457c71e-ef41-4bf9-a59b-b3221df26b41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700879 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700907 4675 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700917 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700925 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d457c71e-ef41-4bf9-a59b-b3221df26b41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:45 crc kubenswrapper[4675]: I0124 07:37:45.700934 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7l7s\" (UniqueName: \"kubernetes.io/projected/d457c71e-ef41-4bf9-a59b-b3221df26b41-kube-api-access-t7l7s\") on node \"crc\" DevicePath \"\""
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.097529 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq" event={"ID":"d457c71e-ef41-4bf9-a59b-b3221df26b41","Type":"ContainerDied","Data":"8264419fbcac8d097619ac8c8a3c44cfde990740dcabe45435c29debb765207d"}
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.097564 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8264419fbcac8d097619ac8c8a3c44cfde990740dcabe45435c29debb765207d"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.097648 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.203895 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"]
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204316 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204339 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204353 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204365 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server"
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204404 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-utilities"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204414 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-utilities"
Jan 24 07:37:46 crc kubenswrapper[4675]: E0124 07:37:46.204448 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-content"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204459 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="extract-content"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204760 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="d457c71e-ef41-4bf9-a59b-b3221df26b41" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.204795 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab038a6-becf-4e29-9a38-9a92e2e7df69" containerName="registry-server"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.205710 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209111 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209448 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209466 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209490 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.209522 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.210087 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.210170 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.219686 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"]
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.311665 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312048 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312243 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312402 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312548 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312673 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312819 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.312947 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.313088 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415195 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415264 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415357 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415432 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415492 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415522 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415553 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415588 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.415644 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.418953 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.420893 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430408 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430701 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.430998 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.431439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.431543 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.436768 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k8fng\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:46 crc kubenswrapper[4675]: I0124 07:37:46.528211 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" Jan 24 07:37:47 crc kubenswrapper[4675]: I0124 07:37:47.135477 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"] Jan 24 07:37:48 crc kubenswrapper[4675]: I0124 07:37:48.135419 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerStarted","Data":"2e2ca50b9a099a17d75a3291c986ddd856869d868abc722788811a80f95d193b"} Jan 24 07:37:49 crc kubenswrapper[4675]: I0124 07:37:49.146193 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerStarted","Data":"c0366ed663d9377d6d87bee264a658d87b4c1d06b5f76d0f3bcdc27e1803092b"} Jan 24 07:37:49 crc kubenswrapper[4675]: I0124 07:37:49.192332 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" podStartSLOduration=2.368853603 podStartE2EDuration="3.192314405s" podCreationTimestamp="2026-01-24 07:37:46 +0000 UTC" firstStartedPulling="2026-01-24 07:37:47.148028484 +0000 UTC m=+2668.444133707" lastFinishedPulling="2026-01-24 07:37:47.971489266 +0000 UTC m=+2669.267594509" observedRunningTime="2026-01-24 07:37:49.189959758 +0000 UTC m=+2670.486064981" watchObservedRunningTime="2026-01-24 07:37:49.192314405 +0000 UTC m=+2670.488419628" Jan 24 07:38:08 crc kubenswrapper[4675]: I0124 07:38:08.629928 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:38:08 crc kubenswrapper[4675]: I0124 07:38:08.630502 
4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:38:38 crc kubenswrapper[4675]: I0124 07:38:38.630554 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:38:38 crc kubenswrapper[4675]: I0124 07:38:38.631638 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.630069 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.630676 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.630748 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.631526 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.631580 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4" gracePeriod=600 Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.867439 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4" exitCode=0 Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.867481 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4"} Jan 24 07:39:08 crc kubenswrapper[4675]: I0124 07:39:08.867516 4675 scope.go:117] "RemoveContainer" containerID="aa72bab78674d8f271e3639bcf25579845a009cb81fee9399f78472a1ad17c71" Jan 24 07:39:09 crc kubenswrapper[4675]: I0124 07:39:09.880843 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" 
event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"} Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.859433 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.861641 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.868007 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.941637 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.941975 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:57 crc kubenswrapper[4675]: I0124 07:39:57.942251 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.044306 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.044413 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.044517 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.045273 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.045328 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.066395 4675 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"redhat-marketplace-77wsd\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") " pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.193283 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd" Jan 24 07:39:58 crc kubenswrapper[4675]: I0124 07:39:58.712442 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"] Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.294181 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687" exitCode=0 Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.294330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687"} Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.294480 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerStarted","Data":"1267f1241a6b5e2d15d5b790f6e8a19388ac394099a6d4aa48d1de2bd8be8847"} Jan 24 07:39:59 crc kubenswrapper[4675]: I0124 07:39:59.297349 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.228543 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.230626 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.250173 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.291822 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.291956 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cb4f\" (UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.291980 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.303610 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerStarted","Data":"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"} Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.393611 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cb4f\" 
(UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.393675 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.393793 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.394142 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.394225 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.425195 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cb4f\" (UniqueName: 
\"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"community-operators-z56sj\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") " pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:00 crc kubenswrapper[4675]: I0124 07:40:00.547948 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56sj" Jan 24 07:40:02 crc kubenswrapper[4675]: I0124 07:40:02.835980 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z56sj"] Jan 24 07:40:02 crc kubenswrapper[4675]: W0124 07:40:02.849940 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ca4082_bd83_4643_bbe5_a41ea26c4ce9.slice/crio-a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6 WatchSource:0}: Error finding container a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6: Status 404 returned error can't find the container with id a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6 Jan 24 07:40:03 crc kubenswrapper[4675]: I0124 07:40:03.339182 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerStarted","Data":"a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6"} Jan 24 07:40:03 crc kubenswrapper[4675]: I0124 07:40:03.343340 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430" exitCode=0 Jan 24 07:40:03 crc kubenswrapper[4675]: I0124 07:40:03.343385 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" 
event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"} Jan 24 07:40:04 crc kubenswrapper[4675]: I0124 07:40:04.354547 4675 generic.go:334] "Generic (PLEG): container finished" podID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4" exitCode=0 Jan 24 07:40:04 crc kubenswrapper[4675]: I0124 07:40:04.354841 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4"} Jan 24 07:40:05 crc kubenswrapper[4675]: I0124 07:40:05.366398 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerStarted","Data":"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"} Jan 24 07:40:05 crc kubenswrapper[4675]: I0124 07:40:05.369085 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerStarted","Data":"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"} Jan 24 07:40:05 crc kubenswrapper[4675]: I0124 07:40:05.422635 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-77wsd" podStartSLOduration=3.156957534 podStartE2EDuration="8.422615352s" podCreationTimestamp="2026-01-24 07:39:57 +0000 UTC" firstStartedPulling="2026-01-24 07:39:59.297033126 +0000 UTC m=+2800.593138349" lastFinishedPulling="2026-01-24 07:40:04.562690934 +0000 UTC m=+2805.858796167" observedRunningTime="2026-01-24 07:40:05.418891022 +0000 UTC m=+2806.714996255" watchObservedRunningTime="2026-01-24 07:40:05.422615352 +0000 UTC 
m=+2806.718720595"
Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.194023 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-77wsd"
Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.194527 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-77wsd"
Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.406201 4675 generic.go:334] "Generic (PLEG): container finished" podID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b" exitCode=0
Jan 24 07:40:08 crc kubenswrapper[4675]: I0124 07:40:08.406276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"}
Jan 24 07:40:09 crc kubenswrapper[4675]: I0124 07:40:09.246993 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-77wsd" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:40:09 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Jan 24 07:40:09 crc kubenswrapper[4675]: >
Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.426252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerStarted","Data":"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"}
Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.447790 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z56sj" podStartSLOduration=5.553043548 podStartE2EDuration="10.447769382s" podCreationTimestamp="2026-01-24 07:40:00 +0000 UTC" firstStartedPulling="2026-01-24 07:40:04.358089707 +0000 UTC m=+2805.654194960" lastFinishedPulling="2026-01-24 07:40:09.252815581 +0000 UTC m=+2810.548920794" observedRunningTime="2026-01-24 07:40:10.44520905 +0000 UTC m=+2811.741314273" watchObservedRunningTime="2026-01-24 07:40:10.447769382 +0000 UTC m=+2811.743874605"
Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.548535 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z56sj"
Jan 24 07:40:10 crc kubenswrapper[4675]: I0124 07:40:10.548593 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z56sj"
Jan 24 07:40:11 crc kubenswrapper[4675]: I0124 07:40:11.591427 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z56sj" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:40:11 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s
Jan 24 07:40:11 crc kubenswrapper[4675]: >
Jan 24 07:40:18 crc kubenswrapper[4675]: I0124 07:40:18.241955 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-77wsd"
Jan 24 07:40:18 crc kubenswrapper[4675]: I0124 07:40:18.297852 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-77wsd"
Jan 24 07:40:18 crc kubenswrapper[4675]: I0124 07:40:18.478003 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"]
Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.497860 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-77wsd" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server" containerID="cri-o://ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff" gracePeriod=2
Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.917050 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd"
Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.988163 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") pod \"8ccf64a6-f29e-4977-84d5-597321d0aa40\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") "
Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.988242 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") pod \"8ccf64a6-f29e-4977-84d5-597321d0aa40\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") "
Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.988263 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") pod \"8ccf64a6-f29e-4977-84d5-597321d0aa40\" (UID: \"8ccf64a6-f29e-4977-84d5-597321d0aa40\") "
Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.989428 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities" (OuterVolumeSpecName: "utilities") pod "8ccf64a6-f29e-4977-84d5-597321d0aa40" (UID: "8ccf64a6-f29e-4977-84d5-597321d0aa40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:40:19 crc kubenswrapper[4675]: I0124 07:40:19.998193 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w" (OuterVolumeSpecName: "kube-api-access-4248w") pod "8ccf64a6-f29e-4977-84d5-597321d0aa40" (UID: "8ccf64a6-f29e-4977-84d5-597321d0aa40"). InnerVolumeSpecName "kube-api-access-4248w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.008548 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ccf64a6-f29e-4977-84d5-597321d0aa40" (UID: "8ccf64a6-f29e-4977-84d5-597321d0aa40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.090121 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4248w\" (UniqueName: \"kubernetes.io/projected/8ccf64a6-f29e-4977-84d5-597321d0aa40-kube-api-access-4248w\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.090163 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.090172 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccf64a6-f29e-4977-84d5-597321d0aa40-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509808 4675 generic.go:334] "Generic (PLEG): container finished" podID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff" exitCode=0
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509882 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wsd"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509886 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"}
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.509982 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wsd" event={"ID":"8ccf64a6-f29e-4977-84d5-597321d0aa40","Type":"ContainerDied","Data":"1267f1241a6b5e2d15d5b790f6e8a19388ac394099a6d4aa48d1de2bd8be8847"}
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.510009 4675 scope.go:117] "RemoveContainer" containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.560212 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"]
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.568533 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wsd"]
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.569114 4675 scope.go:117] "RemoveContainer" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.601072 4675 scope.go:117] "RemoveContainer" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.607063 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z56sj"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.651766 4675 scope.go:117] "RemoveContainer" containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"
Jan 24 07:40:20 crc kubenswrapper[4675]: E0124 07:40:20.653275 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff\": container with ID starting with ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff not found: ID does not exist" containerID="ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653315 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff"} err="failed to get container status \"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff\": rpc error: code = NotFound desc = could not find container \"ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff\": container with ID starting with ccf243299ccae81a5ff00539b5ffeadfa59df86ac277124da6f5b2645db763ff not found: ID does not exist"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653344 4675 scope.go:117] "RemoveContainer" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"
Jan 24 07:40:20 crc kubenswrapper[4675]: E0124 07:40:20.653822 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430\": container with ID starting with 743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430 not found: ID does not exist" containerID="743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653850 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430"} err="failed to get container status \"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430\": rpc error: code = NotFound desc = could not find container \"743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430\": container with ID starting with 743dc7b734bf2a69ea453de5d0d883ce35152b8e77bf92fee995169ddacd8430 not found: ID does not exist"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.653864 4675 scope.go:117] "RemoveContainer" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687"
Jan 24 07:40:20 crc kubenswrapper[4675]: E0124 07:40:20.654163 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687\": container with ID starting with 1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687 not found: ID does not exist" containerID="1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.654191 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687"} err="failed to get container status \"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687\": rpc error: code = NotFound desc = could not find container \"1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687\": container with ID starting with 1ea0eb38bd55f38e7ea9f6c6b9f894f4503b81a6714e70cb899515910d0d2687 not found: ID does not exist"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.657585 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z56sj"
Jan 24 07:40:20 crc kubenswrapper[4675]: I0124 07:40:20.952605 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" path="/var/lib/kubelet/pods/8ccf64a6-f29e-4977-84d5-597321d0aa40/volumes"
Jan 24 07:40:22 crc kubenswrapper[4675]: I0124 07:40:22.874617 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z56sj"]
Jan 24 07:40:22 crc kubenswrapper[4675]: I0124 07:40:22.874874 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z56sj" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server" containerID="cri-o://985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b" gracePeriod=2
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.315143 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56sj"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.353498 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cb4f\" (UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") pod \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") "
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.353781 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") pod \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") "
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.353838 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") pod \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\" (UID: \"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9\") "
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.354682 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities" (OuterVolumeSpecName: "utilities") pod "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" (UID: "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.367863 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f" (OuterVolumeSpecName: "kube-api-access-9cb4f") pod "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" (UID: "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9"). InnerVolumeSpecName "kube-api-access-9cb4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.418240 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" (UID: "f6ca4082-bd83-4643-bbe5-a41ea26c4ce9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.456037 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.456073 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.456087 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cb4f\" (UniqueName: \"kubernetes.io/projected/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9-kube-api-access-9cb4f\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538492 4675 generic.go:334] "Generic (PLEG): container finished" podID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b" exitCode=0
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538553 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"}
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538628 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56sj" event={"ID":"f6ca4082-bd83-4643-bbe5-a41ea26c4ce9","Type":"ContainerDied","Data":"a86660e41ccf58e2de6f41f98cfba77ba9209e0617bdb6eb6ed76595d8f8e9a6"}
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.538660 4675 scope.go:117] "RemoveContainer" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.539050 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56sj"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.573221 4675 scope.go:117] "RemoveContainer" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.621420 4675 scope.go:117] "RemoveContainer" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.626102 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z56sj"]
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.639606 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z56sj"]
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.654041 4675 scope.go:117] "RemoveContainer" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"
Jan 24 07:40:23 crc kubenswrapper[4675]: E0124 07:40:23.655059 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b\": container with ID starting with 985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b not found: ID does not exist" containerID="985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655104 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b"} err="failed to get container status \"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b\": rpc error: code = NotFound desc = could not find container \"985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b\": container with ID starting with 985811abeee0af7dd8670a7bb4f1db3629592202181f8426d3f9cea0b1ed8d3b not found: ID does not exist"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655131 4675 scope.go:117] "RemoveContainer" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"
Jan 24 07:40:23 crc kubenswrapper[4675]: E0124 07:40:23.655429 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b\": container with ID starting with 68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b not found: ID does not exist" containerID="68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655464 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b"} err="failed to get container status \"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b\": rpc error: code = NotFound desc = could not find container \"68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b\": container with ID starting with 68589a5d5dcce329fefeba329d326e8ae8bd44a989e23356728a63e370d3398b not found: ID does not exist"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655487 4675 scope.go:117] "RemoveContainer" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4"
Jan 24 07:40:23 crc kubenswrapper[4675]: E0124 07:40:23.655684 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4\": container with ID starting with 4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4 not found: ID does not exist" containerID="4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4"
Jan 24 07:40:23 crc kubenswrapper[4675]: I0124 07:40:23.655708 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4"} err="failed to get container status \"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4\": rpc error: code = NotFound desc = could not find container \"4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4\": container with ID starting with 4c81dcab1316d7806d3f01c09dd407a06bda520d012299b3c7c68c4445dd96f4 not found: ID does not exist"
Jan 24 07:40:24 crc kubenswrapper[4675]: I0124 07:40:24.955089 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" path="/var/lib/kubelet/pods/f6ca4082-bd83-4643-bbe5-a41ea26c4ce9/volumes"
Jan 24 07:40:47 crc kubenswrapper[4675]: I0124 07:40:47.774265 4675 generic.go:334] "Generic (PLEG): container finished" podID="f4024f70-df50-442c-bcd5-c599d978277c" containerID="c0366ed663d9377d6d87bee264a658d87b4c1d06b5f76d0f3bcdc27e1803092b" exitCode=0
Jan 24 07:40:47 crc kubenswrapper[4675]: I0124 07:40:47.774330 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerDied","Data":"c0366ed663d9377d6d87bee264a658d87b4c1d06b5f76d0f3bcdc27e1803092b"}
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.234256 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428267 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428347 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428385 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428415 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428433 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428460 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428479 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428552 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.428570 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") pod \"f4024f70-df50-442c-bcd5-c599d978277c\" (UID: \"f4024f70-df50-442c-bcd5-c599d978277c\") "
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.441347 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.443475 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69" (OuterVolumeSpecName: "kube-api-access-xbb69") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "kube-api-access-xbb69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.465374 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.467270 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.476105 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.478636 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.479328 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.492230 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory" (OuterVolumeSpecName: "inventory") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.494966 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f4024f70-df50-442c-bcd5-c599d978277c" (UID: "f4024f70-df50-442c-bcd5-c599d978277c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532004 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbb69\" (UniqueName: \"kubernetes.io/projected/f4024f70-df50-442c-bcd5-c599d978277c-kube-api-access-xbb69\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532045 4675 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f4024f70-df50-442c-bcd5-c599d978277c-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532054 4675 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532063 4675 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532074 4675 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532083 4675 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532092 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532102 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-inventory\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.532110 4675 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f4024f70-df50-442c-bcd5-c599d978277c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.804579 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng" event={"ID":"f4024f70-df50-442c-bcd5-c599d978277c","Type":"ContainerDied","Data":"2e2ca50b9a099a17d75a3291c986ddd856869d868abc722788811a80f95d193b"}
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.804649 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e2ca50b9a099a17d75a3291c986ddd856869d868abc722788811a80f95d193b"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.805003 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k8fng"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946388 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"]
Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.946914 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946941 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server"
Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.946956 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-utilities"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946965 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-utilities"
Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.946990 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-content"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.946999 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-content"
Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947017 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4024f70-df50-442c-bcd5-c599d978277c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947025 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4024f70-df50-442c-bcd5-c599d978277c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947035 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947043 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server"
Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947065 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-utilities"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947073 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="extract-utilities"
Jan 24 07:40:49 crc kubenswrapper[4675]: E0124 07:40:49.947091 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-content"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947100 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="extract-content"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947308 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccf64a6-f29e-4977-84d5-597321d0aa40" containerName="registry-server"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947326 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ca4082-bd83-4643-bbe5-a41ea26c4ce9" containerName="registry-server"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.947342 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4024f70-df50-442c-bcd5-c599d978277c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.948137 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.950677 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gn6ht"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.950906 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.951033 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.951065 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.951146 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 24 07:40:49 crc kubenswrapper[4675]: I0124 07:40:49.973144 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"]
Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.143378 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"
Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.143982 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod
\"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144021 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144089 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144209 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.144285 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.263959 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264006 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264039 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264095 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.264122 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.265395 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.265779 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.270459 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.270662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.271310 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.271387 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.271792 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.274003 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.286435 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:50 crc kubenswrapper[4675]: I0124 07:40:50.578010 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:40:51 crc kubenswrapper[4675]: W0124 07:40:51.153212 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode47d7738_3361_429e_90f9_02dee4f0052e.slice/crio-3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d WatchSource:0}: Error finding container 3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d: Status 404 returned error can't find the container with id 3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d Jan 24 07:40:51 crc kubenswrapper[4675]: I0124 07:40:51.153264 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"] Jan 24 07:40:51 crc kubenswrapper[4675]: I0124 07:40:51.824284 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerStarted","Data":"3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d"} Jan 24 07:40:52 crc kubenswrapper[4675]: I0124 07:40:52.836276 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerStarted","Data":"bfbdcdb935d25f2cf70f9a3ec57607a22f91330bedca2616e1a718dd2768d23e"} Jan 24 07:40:52 crc kubenswrapper[4675]: I0124 07:40:52.865923 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" podStartSLOduration=3.049226256 podStartE2EDuration="3.865903134s" podCreationTimestamp="2026-01-24 07:40:49 +0000 UTC" firstStartedPulling="2026-01-24 07:40:51.156548304 +0000 UTC m=+2852.452653537" lastFinishedPulling="2026-01-24 07:40:51.973225182 +0000 UTC m=+2853.269330415" 
observedRunningTime="2026-01-24 07:40:52.856361172 +0000 UTC m=+2854.152466425" watchObservedRunningTime="2026-01-24 07:40:52.865903134 +0000 UTC m=+2854.162008367" Jan 24 07:41:08 crc kubenswrapper[4675]: I0124 07:41:08.630239 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:41:08 crc kubenswrapper[4675]: I0124 07:41:08.630902 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:41:38 crc kubenswrapper[4675]: I0124 07:41:38.634154 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:41:38 crc kubenswrapper[4675]: I0124 07:41:38.634735 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.629797 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.630511 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.630574 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.631635 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:42:08 crc kubenswrapper[4675]: I0124 07:42:08.631779 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" gracePeriod=600 Jan 24 07:42:09 crc kubenswrapper[4675]: E0124 07:42:09.480573 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.780304 
4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" exitCode=0 Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.780445 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"} Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.780511 4675 scope.go:117] "RemoveContainer" containerID="d342d159e0fcea37d58e62d2f8a5fb00148cb6bd39bdf7f545a9067214bc08b4" Jan 24 07:42:09 crc kubenswrapper[4675]: I0124 07:42:09.781835 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:09 crc kubenswrapper[4675]: E0124 07:42:09.782295 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:23 crc kubenswrapper[4675]: I0124 07:42:23.942165 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:23 crc kubenswrapper[4675]: E0124 07:42:23.942986 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:37 crc kubenswrapper[4675]: I0124 07:42:37.942498 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:37 crc kubenswrapper[4675]: E0124 07:42:37.944097 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:42:51 crc kubenswrapper[4675]: I0124 07:42:51.942368 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:42:51 crc kubenswrapper[4675]: E0124 07:42:51.943220 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:04 crc kubenswrapper[4675]: I0124 07:43:04.944001 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:04 crc kubenswrapper[4675]: E0124 07:43:04.944709 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:16 crc kubenswrapper[4675]: I0124 07:43:16.942375 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:16 crc kubenswrapper[4675]: E0124 07:43:16.943054 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:29 crc kubenswrapper[4675]: I0124 07:43:29.942839 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:29 crc kubenswrapper[4675]: E0124 07:43:29.943774 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.181209 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.183984 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.190669 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.299390 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.299896 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.299932 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402111 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402231 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402442 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402662 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.402801 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.447987 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"redhat-operators-cdqth\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") " pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.504216 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth" Jan 24 07:43:44 crc kubenswrapper[4675]: I0124 07:43:44.942451 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:43:44 crc kubenswrapper[4675]: E0124 07:43:44.942958 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.003602 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"] Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.615934 4675 generic.go:334] "Generic (PLEG): container finished" podID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2" exitCode=0 Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.616026 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2"} Jan 24 07:43:45 crc kubenswrapper[4675]: I0124 07:43:45.616209 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerStarted","Data":"0e2ea1a793a15a8b53384e8d6fe49b51ff8604d90512645824867c9af7f24df5"} Jan 24 07:43:46 crc kubenswrapper[4675]: I0124 07:43:46.629263 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" 
event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerStarted","Data":"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"}
Jan 24 07:43:50 crc kubenswrapper[4675]: I0124 07:43:50.679487    4675 generic.go:334] "Generic (PLEG): container finished" podID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215" exitCode=0
Jan 24 07:43:50 crc kubenswrapper[4675]: I0124 07:43:50.679845    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"}
Jan 24 07:43:52 crc kubenswrapper[4675]: I0124 07:43:52.700222    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerStarted","Data":"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"}
Jan 24 07:43:52 crc kubenswrapper[4675]: I0124 07:43:52.727502    4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdqth" podStartSLOduration=2.154755847 podStartE2EDuration="8.727482191s" podCreationTimestamp="2026-01-24 07:43:44 +0000 UTC" firstStartedPulling="2026-01-24 07:43:45.617288932 +0000 UTC m=+3026.913394155" lastFinishedPulling="2026-01-24 07:43:52.190015276 +0000 UTC m=+3033.486120499" observedRunningTime="2026-01-24 07:43:52.72044931 +0000 UTC m=+3034.016554533" watchObservedRunningTime="2026-01-24 07:43:52.727482191 +0000 UTC m=+3034.023587404"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.505156    4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdqth"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.505486    4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cdqth"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.693594    4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7b874"]
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.695900    4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.707509    4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b874"]
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.810738    4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.810804    4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.810860    4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.913225    4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.913806    4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.914012    4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.914246    4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.914690    4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:54 crc kubenswrapper[4675]: I0124 07:43:54.935984    4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"certified-operators-7b874\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") " pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.032262    4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.556399    4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdqth" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:43:55 crc kubenswrapper[4675]: 	timeout: failed to connect service ":50051" within 1s
Jan 24 07:43:55 crc kubenswrapper[4675]:  >
Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.668771    4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b874"]
Jan 24 07:43:55 crc kubenswrapper[4675]: W0124 07:43:55.675567    4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58c4feae_4844_4d57_abb6_e3128e04b0d8.slice/crio-0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f WatchSource:0}: Error finding container 0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f: Status 404 returned error can't find the container with id 0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f
Jan 24 07:43:55 crc kubenswrapper[4675]: I0124 07:43:55.767819    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerStarted","Data":"0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f"}
Jan 24 07:43:56 crc kubenswrapper[4675]: I0124 07:43:56.778445    4675 generic.go:334] "Generic (PLEG): container finished" podID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100" exitCode=0
Jan 24 07:43:56 crc kubenswrapper[4675]: I0124 07:43:56.778833    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100"}
Jan 24 07:43:57 crc kubenswrapper[4675]: I0124 07:43:57.788103    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerStarted","Data":"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"}
Jan 24 07:43:58 crc kubenswrapper[4675]: I0124 07:43:58.799611    4675 generic.go:334] "Generic (PLEG): container finished" podID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95" exitCode=0
Jan 24 07:43:58 crc kubenswrapper[4675]: I0124 07:43:58.799760    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"}
Jan 24 07:43:59 crc kubenswrapper[4675]: I0124 07:43:59.810440    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerStarted","Data":"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"}
Jan 24 07:43:59 crc kubenswrapper[4675]: I0124 07:43:59.833070    4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7b874" podStartSLOduration=3.13977233 podStartE2EDuration="5.833055078s" podCreationTimestamp="2026-01-24 07:43:54 +0000 UTC" firstStartedPulling="2026-01-24 07:43:56.781203127 +0000 UTC m=+3038.077308350" lastFinishedPulling="2026-01-24 07:43:59.474485875 +0000 UTC m=+3040.770591098" observedRunningTime="2026-01-24 07:43:59.831794817 +0000 UTC m=+3041.127900040" watchObservedRunningTime="2026-01-24 07:43:59.833055078 +0000 UTC m=+3041.129160301"
Jan 24 07:43:59 crc kubenswrapper[4675]: I0124 07:43:59.949028    4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"
Jan 24 07:43:59 crc kubenswrapper[4675]: E0124 07:43:59.949288    4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.032786    4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.033652    4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.094767    4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.559977    4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdqth" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" probeResult="failure" output=<
Jan 24 07:44:05 crc kubenswrapper[4675]: 	timeout: failed to connect service ":50051" within 1s
Jan 24 07:44:05 crc kubenswrapper[4675]:  >
Jan 24 07:44:05 crc kubenswrapper[4675]: I0124 07:44:05.963004    4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:44:06 crc kubenswrapper[4675]: I0124 07:44:06.011443    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b874"]
Jan 24 07:44:07 crc kubenswrapper[4675]: I0124 07:44:07.896908    4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7b874" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="registry-server" containerID="cri-o://29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3" gracePeriod=2
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.373629    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.480436    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") pod \"58c4feae-4844-4d57-abb6-e3128e04b0d8\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") "
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.480523    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") pod \"58c4feae-4844-4d57-abb6-e3128e04b0d8\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") "
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.480570    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") pod \"58c4feae-4844-4d57-abb6-e3128e04b0d8\" (UID: \"58c4feae-4844-4d57-abb6-e3128e04b0d8\") "
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.481365    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities" (OuterVolumeSpecName: "utilities") pod "58c4feae-4844-4d57-abb6-e3128e04b0d8" (UID: "58c4feae-4844-4d57-abb6-e3128e04b0d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.486096    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w" (OuterVolumeSpecName: "kube-api-access-94d6w") pod "58c4feae-4844-4d57-abb6-e3128e04b0d8" (UID: "58c4feae-4844-4d57-abb6-e3128e04b0d8"). InnerVolumeSpecName "kube-api-access-94d6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.526083    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58c4feae-4844-4d57-abb6-e3128e04b0d8" (UID: "58c4feae-4844-4d57-abb6-e3128e04b0d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.583138    4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.583181    4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94d6w\" (UniqueName: \"kubernetes.io/projected/58c4feae-4844-4d57-abb6-e3128e04b0d8-kube-api-access-94d6w\") on node \"crc\" DevicePath \"\""
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.583196    4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58c4feae-4844-4d57-abb6-e3128e04b0d8-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909058    4675 generic.go:334] "Generic (PLEG): container finished" podID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3" exitCode=0
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909099    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"}
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909127    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b874"
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909152    4675 scope.go:117] "RemoveContainer" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.909139    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b874" event={"ID":"58c4feae-4844-4d57-abb6-e3128e04b0d8","Type":"ContainerDied","Data":"0c64c1b4fe0d7407a2bf4f2f1730f8f1c91229b38701b50edc9a470e90e0be9f"}
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.934023    4675 scope.go:117] "RemoveContainer" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.968972    4675 scope.go:117] "RemoveContainer" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100"
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.983603    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b874"]
Jan 24 07:44:08 crc kubenswrapper[4675]: I0124 07:44:08.994034    4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7b874"]
Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.012303    4675 scope.go:117] "RemoveContainer" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"
Jan 24 07:44:09 crc kubenswrapper[4675]: E0124 07:44:09.012838    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3\": container with ID starting with 29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3 not found: ID does not exist" containerID="29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"
Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.012869    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3"} err="failed to get container status \"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3\": rpc error: code = NotFound desc = could not find container \"29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3\": container with ID starting with 29dda9de77c1eaffa9fbe55169ff35630bf0b938bbe812179f521c053cc32cc3 not found: ID does not exist"
Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.012889    4675 scope.go:117] "RemoveContainer" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"
Jan 24 07:44:09 crc kubenswrapper[4675]: E0124 07:44:09.013147    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95\": container with ID starting with cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95 not found: ID does not exist" containerID="cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"
Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.013180    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95"} err="failed to get container status \"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95\": rpc error: code = NotFound desc = could not find container \"cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95\": container with ID starting with cc48ee8b3b3858f4136828805b80db4669e264058e8fbfcc5816db42c8b82c95 not found: ID does not exist"
Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.013199    4675 scope.go:117] "RemoveContainer" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100"
Jan 24 07:44:09 crc kubenswrapper[4675]: E0124 07:44:09.013541    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100\": container with ID starting with 416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100 not found: ID does not exist" containerID="416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100"
Jan 24 07:44:09 crc kubenswrapper[4675]: I0124 07:44:09.013564    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100"} err="failed to get container status \"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100\": rpc error: code = NotFound desc = could not find container \"416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100\": container with ID starting with 416445ace1c0a2e4fdf58b7221fb008ee7b02d36de939f470470cf598b9db100 not found: ID does not exist"
Jan 24 07:44:10 crc kubenswrapper[4675]: I0124 07:44:10.957206    4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" path="/var/lib/kubelet/pods/58c4feae-4844-4d57-abb6-e3128e04b0d8/volumes"
Jan 24 07:44:11 crc kubenswrapper[4675]: I0124 07:44:11.943606    4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"
Jan 24 07:44:11 crc kubenswrapper[4675]: E0124 07:44:11.944211    4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:44:14 crc kubenswrapper[4675]: I0124 07:44:14.557113    4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdqth"
Jan 24 07:44:14 crc kubenswrapper[4675]: I0124 07:44:14.624356    4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdqth"
Jan 24 07:44:15 crc kubenswrapper[4675]: I0124 07:44:15.384011    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"]
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.002382    4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdqth" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" containerID="cri-o://2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb" gracePeriod=2
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.441980    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth"
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.569482    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") pod \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") "
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.569631    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") pod \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") "
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.569675    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") pod \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\" (UID: \"2ffaa9bd-d5dd-4a69-a74b-239be16a2199\") "
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.570483    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities" (OuterVolumeSpecName: "utilities") pod "2ffaa9bd-d5dd-4a69-a74b-239be16a2199" (UID: "2ffaa9bd-d5dd-4a69-a74b-239be16a2199"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.575322    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85" (OuterVolumeSpecName: "kube-api-access-7vs85") pod "2ffaa9bd-d5dd-4a69-a74b-239be16a2199" (UID: "2ffaa9bd-d5dd-4a69-a74b-239be16a2199"). InnerVolumeSpecName "kube-api-access-7vs85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.669291    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ffaa9bd-d5dd-4a69-a74b-239be16a2199" (UID: "2ffaa9bd-d5dd-4a69-a74b-239be16a2199"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.671873    4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vs85\" (UniqueName: \"kubernetes.io/projected/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-kube-api-access-7vs85\") on node \"crc\" DevicePath \"\""
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.671906    4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:44:16 crc kubenswrapper[4675]: I0124 07:44:16.671916    4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ffaa9bd-d5dd-4a69-a74b-239be16a2199-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.014699    4675 generic.go:334] "Generic (PLEG): container finished" podID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb" exitCode=0
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.014798    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"}
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.015068    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdqth" event={"ID":"2ffaa9bd-d5dd-4a69-a74b-239be16a2199","Type":"ContainerDied","Data":"0e2ea1a793a15a8b53384e8d6fe49b51ff8604d90512645824867c9af7f24df5"}
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.014853    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdqth"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.015226    4675 scope.go:117] "RemoveContainer" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.046751    4675 scope.go:117] "RemoveContainer" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.075298    4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"]
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.086080    4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdqth"]
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.096747    4675 scope.go:117] "RemoveContainer" containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.145944    4675 scope.go:117] "RemoveContainer" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"
Jan 24 07:44:17 crc kubenswrapper[4675]: E0124 07:44:17.146363    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb\": container with ID starting with 2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb not found: ID does not exist" containerID="2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.146454    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb"} err="failed to get container status \"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb\": rpc error: code = NotFound desc = could not find container \"2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb\": container with ID starting with 2230e14275e2878c566ac68c01f8dac647a94df6233278bb8f6c18b994fbd6bb not found: ID does not exist"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.146538    4675 scope.go:117] "RemoveContainer" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"
Jan 24 07:44:17 crc kubenswrapper[4675]: E0124 07:44:17.147453    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215\": container with ID starting with 15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215 not found: ID does not exist" containerID="15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.147566    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215"} err="failed to get container status \"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215\": rpc error: code = NotFound desc = could not find container \"15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215\": container with ID starting with 15d652e552c0b9621ff121e3c27c92ad149d2c4aeca2574afd44109396383215 not found: ID does not exist"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.147630    4675 scope.go:117] "RemoveContainer" containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2"
Jan 24 07:44:17 crc kubenswrapper[4675]: E0124 07:44:17.148190    4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2\": container with ID starting with 10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2 not found: ID does not exist" containerID="10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2"
Jan 24 07:44:17 crc kubenswrapper[4675]: I0124 07:44:17.148239    4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2"} err="failed to get container status \"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2\": rpc error: code = NotFound desc = could not find container \"10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2\": container with ID starting with 10df46875fadc972c711a402c04b2ca2a67bf011d0fa5c4c18bb6e6b6c44eab2 not found: ID does not exist"
Jan 24 07:44:18 crc kubenswrapper[4675]: I0124 07:44:18.959305    4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" path="/var/lib/kubelet/pods/2ffaa9bd-d5dd-4a69-a74b-239be16a2199/volumes"
Jan 24 07:44:22 crc kubenswrapper[4675]: I0124 07:44:22.943608    4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"
Jan 24 07:44:22 crc kubenswrapper[4675]: E0124 07:44:22.944357    4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:44:36 crc kubenswrapper[4675]: I0124 07:44:36.942528    4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd"
Jan 24 07:44:36 crc kubenswrapper[4675]: E0124 07:44:36.943184    4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:44:41 crc kubenswrapper[4675]: I0124 07:44:41.230140    4675 generic.go:334] "Generic (PLEG): container finished" podID="e47d7738-3361-429e-90f9-02dee4f0052e" containerID="bfbdcdb935d25f2cf70f9a3ec57607a22f91330bedca2616e1a718dd2768d23e" exitCode=0
Jan 24 07:44:41 crc kubenswrapper[4675]: I0124 07:44:41.230238    4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerDied","Data":"bfbdcdb935d25f2cf70f9a3ec57607a22f91330bedca2616e1a718dd2768d23e"}
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.689628    4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx"
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.744875    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") "
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.744931    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") "
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.744993    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") "
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745025    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") "
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745217    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") "
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745252    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") "
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.745323    4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") pod \"e47d7738-3361-429e-90f9-02dee4f0052e\" (UID: \"e47d7738-3361-429e-90f9-02dee4f0052e\") "
Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.759938    4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle")
pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.765621 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765" (OuterVolumeSpecName: "kube-api-access-sw765") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "kube-api-access-sw765". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.774526 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.777267 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory" (OuterVolumeSpecName: "inventory") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.777874 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.789484 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.789761 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e47d7738-3361-429e-90f9-02dee4f0052e" (UID: "e47d7738-3361-429e-90f9-02dee4f0052e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848530 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw765\" (UniqueName: \"kubernetes.io/projected/e47d7738-3361-429e-90f9-02dee4f0052e-kube-api-access-sw765\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848566 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848580 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848593 4675 
reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848605 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848616 4675 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:42 crc kubenswrapper[4675]: I0124 07:44:42.848629 4675 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47d7738-3361-429e-90f9-02dee4f0052e-inventory\") on node \"crc\" DevicePath \"\"" Jan 24 07:44:43 crc kubenswrapper[4675]: I0124 07:44:43.254332 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" event={"ID":"e47d7738-3361-429e-90f9-02dee4f0052e","Type":"ContainerDied","Data":"3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d"} Jan 24 07:44:43 crc kubenswrapper[4675]: I0124 07:44:43.254687 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3516bc17165950097d59c7f24b005cc86339cda58f74ff42e2f696b96abd3f5d" Jan 24 07:44:43 crc kubenswrapper[4675]: I0124 07:44:43.254475 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx" Jan 24 07:44:51 crc kubenswrapper[4675]: I0124 07:44:51.942284 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:44:51 crc kubenswrapper[4675]: E0124 07:44:51.944381 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.149942 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x"] Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151009 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151029 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151055 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151064 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151091 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: 
I0124 07:45:00.151100 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-content" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151111 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47d7738-3361-429e-90f9-02dee4f0052e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151119 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47d7738-3361-429e-90f9-02dee4f0052e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151138 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151146 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151160 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151170 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="extract-utilities" Jan 24 07:45:00 crc kubenswrapper[4675]: E0124 07:45:00.151187 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151194 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151411 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c4feae-4844-4d57-abb6-e3128e04b0d8" 
containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151438 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47d7738-3361-429e-90f9-02dee4f0052e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.151462 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffaa9bd-d5dd-4a69-a74b-239be16a2199" containerName="registry-server" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.152278 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.160053 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.160053 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.173797 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x"] Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.208751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.208828 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.209024 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.310784 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.310868 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.310920 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: 
I0124 07:45:00.312044 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.333453 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.333801 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"collect-profiles-29487345-tg86x\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:00 crc kubenswrapper[4675]: I0124 07:45:00.474884 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.004623 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x"] Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.511871 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerStarted","Data":"0d6f4c897e3fd3da783cc55d6eee5105fa978a22baf600d528d5387a2c99beeb"} Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.514179 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerStarted","Data":"1874a7e18cac42a442f2f74841584b88f2922de05742b97b2f3ab3b8ad02b02d"} Jan 24 07:45:01 crc kubenswrapper[4675]: I0124 07:45:01.542056 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" podStartSLOduration=1.542038815 podStartE2EDuration="1.542038815s" podCreationTimestamp="2026-01-24 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:45:01.536995173 +0000 UTC m=+3102.833100396" watchObservedRunningTime="2026-01-24 07:45:01.542038815 +0000 UTC m=+3102.838144038" Jan 24 07:45:02 crc kubenswrapper[4675]: I0124 07:45:02.521834 4675 generic.go:334] "Generic (PLEG): container finished" podID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerID="0d6f4c897e3fd3da783cc55d6eee5105fa978a22baf600d528d5387a2c99beeb" exitCode=0 Jan 24 07:45:02 crc kubenswrapper[4675]: I0124 07:45:02.522528 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerDied","Data":"0d6f4c897e3fd3da783cc55d6eee5105fa978a22baf600d528d5387a2c99beeb"} Jan 24 07:45:03 crc kubenswrapper[4675]: I0124 07:45:03.854300 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:03 crc kubenswrapper[4675]: I0124 07:45:03.943457 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:03 crc kubenswrapper[4675]: E0124 07:45:03.944089 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.012429 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") pod \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.012505 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") pod \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.012590 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") pod \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\" (UID: \"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd\") " Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.013668 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" (UID: "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.024573 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" (UID: "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.027019 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7" (OuterVolumeSpecName: "kube-api-access-fbxw7") pod "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" (UID: "9c752fa6-1fb5-4e20-a186-fe950e9fc3bd"). InnerVolumeSpecName "kube-api-access-fbxw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.116623 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.116661 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbxw7\" (UniqueName: \"kubernetes.io/projected/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-kube-api-access-fbxw7\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.116670 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c752fa6-1fb5-4e20-a186-fe950e9fc3bd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.547904 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" event={"ID":"9c752fa6-1fb5-4e20-a186-fe950e9fc3bd","Type":"ContainerDied","Data":"1874a7e18cac42a442f2f74841584b88f2922de05742b97b2f3ab3b8ad02b02d"} Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.547973 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1874a7e18cac42a442f2f74841584b88f2922de05742b97b2f3ab3b8ad02b02d" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.548047 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487345-tg86x" Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.641755 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"] Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.654161 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487300-n24zh"] Jan 24 07:45:04 crc kubenswrapper[4675]: I0124 07:45:04.958440 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2777ca-be51-4dc3-b7da-d84bd7ca16c4" path="/var/lib/kubelet/pods/df2777ca-be51-4dc3-b7da-d84bd7ca16c4/volumes" Jan 24 07:45:18 crc kubenswrapper[4675]: I0124 07:45:18.987542 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:18 crc kubenswrapper[4675]: E0124 07:45:18.988917 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.649553 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 07:45:28 crc kubenswrapper[4675]: E0124 07:45:28.650512 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerName="collect-profiles" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.650528 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerName="collect-profiles" Jan 24 07:45:28 crc 
kubenswrapper[4675]: I0124 07:45:28.650779 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c752fa6-1fb5-4e20-a186-fe950e9fc3bd" containerName="collect-profiles" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.651513 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.654269 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.654552 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zffgp" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.659611 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.660982 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.661435 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.764091 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.764524 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.764684 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765024 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765306 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765484 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765679 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" 
(UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765822 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.765977 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868066 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868141 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868175 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " 
pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868296 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868337 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868396 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868430 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868502 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.868589 4675 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.869554 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.870451 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.870805 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.871160 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.871363 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.877430 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.877439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.891610 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.896767 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.904425 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") 
" pod="openstack/tempest-tests-tempest" Jan 24 07:45:28 crc kubenswrapper[4675]: I0124 07:45:28.984748 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:45:29 crc kubenswrapper[4675]: I0124 07:45:29.480513 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:45:29 crc kubenswrapper[4675]: I0124 07:45:29.487282 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 24 07:45:29 crc kubenswrapper[4675]: I0124 07:45:29.846534 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerStarted","Data":"b58f40ef8b59d9e13d49226574fe9e8046a0f1e06fd832dbb76212fb15d47084"} Jan 24 07:45:31 crc kubenswrapper[4675]: I0124 07:45:31.950800 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:31 crc kubenswrapper[4675]: E0124 07:45:31.951303 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:41 crc kubenswrapper[4675]: I0124 07:45:41.673077 4675 scope.go:117] "RemoveContainer" containerID="2ee1de4c569b0dfae84a9127d5e07bf0bf62a91389eaf5b8b6361fce4ef2d02f" Jan 24 07:45:44 crc kubenswrapper[4675]: I0124 07:45:44.943948 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:44 crc kubenswrapper[4675]: E0124 07:45:44.944830 4675 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:45:55 crc kubenswrapper[4675]: I0124 07:45:55.943252 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:45:55 crc kubenswrapper[4675]: E0124 07:45:55.944108 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:06 crc kubenswrapper[4675]: I0124 07:46:06.943025 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:06 crc kubenswrapper[4675]: E0124 07:46:06.943643 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:10 crc kubenswrapper[4675]: E0124 07:46:10.968455 4675 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 24 
07:46:10 crc kubenswrapper[4675]: E0124 07:46:10.970396 4675 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2x7r,ReadOnly:true,MountPath:/var/run/secrets/kubernete
s.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(0e021dd7-397f-4546-a38b-c8c13a1c830d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 24 07:46:10 crc kubenswrapper[4675]: E0124 07:46:10.971634 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" Jan 24 07:46:11 crc kubenswrapper[4675]: E0124 07:46:11.234311 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" Jan 24 07:46:19 crc kubenswrapper[4675]: I0124 07:46:19.943053 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:19 crc kubenswrapper[4675]: E0124 07:46:19.943780 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:23 crc kubenswrapper[4675]: I0124 07:46:23.420524 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 24 07:46:25 crc kubenswrapper[4675]: I0124 07:46:25.367283 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerStarted","Data":"385c10b2b55e4de5d8ab2c6942a73ce0435169e041a1198367e1b28172ea5233"} Jan 24 07:46:25 crc kubenswrapper[4675]: I0124 07:46:25.403147 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.464930784 podStartE2EDuration="58.403130368s" podCreationTimestamp="2026-01-24 07:45:27 +0000 UTC" firstStartedPulling="2026-01-24 07:45:29.480115271 +0000 UTC m=+3130.776220534" lastFinishedPulling="2026-01-24 07:46:23.418314895 +0000 UTC m=+3184.714420118" observedRunningTime="2026-01-24 07:46:25.393211337 +0000 UTC m=+3186.689316560" watchObservedRunningTime="2026-01-24 07:46:25.403130368 +0000 UTC m=+3186.699235591" Jan 24 07:46:32 crc kubenswrapper[4675]: I0124 
07:46:32.943172 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:32 crc kubenswrapper[4675]: E0124 07:46:32.944050 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:46:47 crc kubenswrapper[4675]: I0124 07:46:47.943100 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:46:47 crc kubenswrapper[4675]: E0124 07:46:47.943776 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:47:02 crc kubenswrapper[4675]: I0124 07:47:02.943032 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:47:02 crc kubenswrapper[4675]: E0124 07:47:02.943845 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:47:16 crc 
kubenswrapper[4675]: I0124 07:47:16.942543 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:47:17 crc kubenswrapper[4675]: I0124 07:47:17.877661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9"} Jan 24 07:47:54 crc kubenswrapper[4675]: I0124 07:47:54.234658 4675 generic.go:334] "Generic (PLEG): container finished" podID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerID="385c10b2b55e4de5d8ab2c6942a73ce0435169e041a1198367e1b28172ea5233" exitCode=0 Jan 24 07:47:54 crc kubenswrapper[4675]: I0124 07:47:54.234845 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerDied","Data":"385c10b2b55e4de5d8ab2c6942a73ce0435169e041a1198367e1b28172ea5233"} Jan 24 07:47:55 crc kubenswrapper[4675]: I0124 07:47:55.947167 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090607 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090742 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090800 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090847 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090873 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090909 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.090969 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.091012 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.091095 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") pod \"0e021dd7-397f-4546-a38b-c8c13a1c830d\" (UID: \"0e021dd7-397f-4546-a38b-c8c13a1c830d\") " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.092050 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data" (OuterVolumeSpecName: "config-data") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.095694 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.095951 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.106325 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.109043 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r" (OuterVolumeSpecName: "kube-api-access-n2x7r") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "kube-api-access-n2x7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.127518 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.131786 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.155908 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.172848 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0e021dd7-397f-4546-a38b-c8c13a1c830d" (UID: "0e021dd7-397f-4546-a38b-c8c13a1c830d"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193859 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2x7r\" (UniqueName: \"kubernetes.io/projected/0e021dd7-397f-4546-a38b-c8c13a1c830d-kube-api-access-n2x7r\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193916 4675 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193932 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193946 4675 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e021dd7-397f-4546-a38b-c8c13a1c830d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193959 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e021dd7-397f-4546-a38b-c8c13a1c830d-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.193991 4675 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.194002 4675 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-openstack-config-secret\") on node \"crc\" 
DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.194042 4675 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.196060 4675 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e021dd7-397f-4546-a38b-c8c13a1c830d-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.215169 4675 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.259061 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e021dd7-397f-4546-a38b-c8c13a1c830d","Type":"ContainerDied","Data":"b58f40ef8b59d9e13d49226574fe9e8046a0f1e06fd832dbb76212fb15d47084"} Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.259098 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58f40ef8b59d9e13d49226574fe9e8046a0f1e06fd832dbb76212fb15d47084" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.259151 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 24 07:47:56 crc kubenswrapper[4675]: I0124 07:47:56.298415 4675 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.690768 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 07:48:02 crc kubenswrapper[4675]: E0124 07:48:02.692995 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerName="tempest-tests-tempest-tests-runner" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.693106 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerName="tempest-tests-tempest-tests-runner" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.720169 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e021dd7-397f-4546-a38b-c8c13a1c830d" containerName="tempest-tests-tempest-tests-runner" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.721115 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.724091 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.738246 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zffgp" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.824920 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.825052 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdvz\" (UniqueName: \"kubernetes.io/projected/6051ab9a-5c43-4757-a1ff-3f199dee0a79-kube-api-access-vkdvz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.926199 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.926319 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdvz\" (UniqueName: 
\"kubernetes.io/projected/6051ab9a-5c43-4757-a1ff-3f199dee0a79-kube-api-access-vkdvz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.926816 4675 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.956834 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdvz\" (UniqueName: \"kubernetes.io/projected/6051ab9a-5c43-4757-a1ff-3f199dee0a79-kube-api-access-vkdvz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:02 crc kubenswrapper[4675]: I0124 07:48:02.977809 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6051ab9a-5c43-4757-a1ff-3f199dee0a79\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:03 crc kubenswrapper[4675]: I0124 07:48:03.061387 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 24 07:48:03 crc kubenswrapper[4675]: I0124 07:48:03.570378 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 24 07:48:03 crc kubenswrapper[4675]: W0124 07:48:03.592860 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6051ab9a_5c43_4757_a1ff_3f199dee0a79.slice/crio-b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999 WatchSource:0}: Error finding container b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999: Status 404 returned error can't find the container with id b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999 Jan 24 07:48:04 crc kubenswrapper[4675]: I0124 07:48:04.358339 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6051ab9a-5c43-4757-a1ff-3f199dee0a79","Type":"ContainerStarted","Data":"b2faa99ec00f9b711122fe80b2d38a0d7e703e25105a40a7ed0b6c99e65e4999"} Jan 24 07:48:05 crc kubenswrapper[4675]: I0124 07:48:05.369639 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6051ab9a-5c43-4757-a1ff-3f199dee0a79","Type":"ContainerStarted","Data":"2ecd5de1e849759e8ab9154c1d74fc7b627da686494e2580089cd17c90952a4d"} Jan 24 07:48:05 crc kubenswrapper[4675]: I0124 07:48:05.392861 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.370970201 podStartE2EDuration="3.392835733s" podCreationTimestamp="2026-01-24 07:48:02 +0000 UTC" firstStartedPulling="2026-01-24 07:48:03.608085585 +0000 UTC m=+3284.904190818" lastFinishedPulling="2026-01-24 07:48:04.629951077 +0000 UTC m=+3285.926056350" 
observedRunningTime="2026-01-24 07:48:05.383914776 +0000 UTC m=+3286.680019999" watchObservedRunningTime="2026-01-24 07:48:05.392835733 +0000 UTC m=+3286.688940966" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.210539 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.212871 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.217025 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rk4k5"/"openshift-service-ca.crt" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.217148 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rk4k5"/"kube-root-ca.crt" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.244230 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.264460 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.264581 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.366355 4675 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.366482 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.366869 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.408559 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"must-gather-w5wnw\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.529237 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:48:30 crc kubenswrapper[4675]: I0124 07:48:30.994053 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:48:30 crc kubenswrapper[4675]: W0124 07:48:30.999751 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod179bf7a3_0095_4057_b946_ac2ee02c99ef.slice/crio-9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19 WatchSource:0}: Error finding container 9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19: Status 404 returned error can't find the container with id 9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19 Jan 24 07:48:31 crc kubenswrapper[4675]: I0124 07:48:31.631064 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerStarted","Data":"9a67f0e263d9acd21bdc9b203979f276fd2c4ed7b1f2a6929a752ca8ae940f19"} Jan 24 07:48:40 crc kubenswrapper[4675]: I0124 07:48:40.720105 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerStarted","Data":"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2"} Jan 24 07:48:40 crc kubenswrapper[4675]: I0124 07:48:40.720607 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerStarted","Data":"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588"} Jan 24 07:48:40 crc kubenswrapper[4675]: I0124 07:48:40.749059 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" podStartSLOduration=1.9752995260000001 
podStartE2EDuration="10.749036161s" podCreationTimestamp="2026-01-24 07:48:30 +0000 UTC" firstStartedPulling="2026-01-24 07:48:31.001765248 +0000 UTC m=+3312.297870471" lastFinishedPulling="2026-01-24 07:48:39.775501883 +0000 UTC m=+3321.071607106" observedRunningTime="2026-01-24 07:48:40.733549715 +0000 UTC m=+3322.029654938" watchObservedRunningTime="2026-01-24 07:48:40.749036161 +0000 UTC m=+3322.045141384" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.701065 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-kcg4w"] Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.705857 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.707420 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rk4k5"/"default-dockercfg-grs5v" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.819943 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.820112 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.921778 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4gnc\" (UniqueName: 
\"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.921849 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.922019 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:44 crc kubenswrapper[4675]: I0124 07:48:44.938367 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"crc-debug-kcg4w\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:45 crc kubenswrapper[4675]: I0124 07:48:45.030158 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:48:45 crc kubenswrapper[4675]: W0124 07:48:45.081116 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f8fcc1d_4f4e_4756_a0ee_dad9db37efd0.slice/crio-765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e WatchSource:0}: Error finding container 765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e: Status 404 returned error can't find the container with id 765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e Jan 24 07:48:45 crc kubenswrapper[4675]: I0124 07:48:45.766583 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" event={"ID":"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0","Type":"ContainerStarted","Data":"765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e"} Jan 24 07:48:57 crc kubenswrapper[4675]: I0124 07:48:57.882663 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" event={"ID":"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0","Type":"ContainerStarted","Data":"c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd"} Jan 24 07:48:57 crc kubenswrapper[4675]: I0124 07:48:57.905383 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" podStartSLOduration=2.249167843 podStartE2EDuration="13.905352466s" podCreationTimestamp="2026-01-24 07:48:44 +0000 UTC" firstStartedPulling="2026-01-24 07:48:45.08592206 +0000 UTC m=+3326.382027283" lastFinishedPulling="2026-01-24 07:48:56.742106683 +0000 UTC m=+3338.038211906" observedRunningTime="2026-01-24 07:48:57.903314287 +0000 UTC m=+3339.199419510" watchObservedRunningTime="2026-01-24 07:48:57.905352466 +0000 UTC m=+3339.201457689" Jan 24 07:49:12 crc kubenswrapper[4675]: I0124 07:49:12.005843 4675 generic.go:334] "Generic (PLEG): container 
finished" podID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerID="c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd" exitCode=0 Jan 24 07:49:12 crc kubenswrapper[4675]: I0124 07:49:12.005923 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" event={"ID":"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0","Type":"ContainerDied","Data":"c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd"} Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.114212 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.157770 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-kcg4w"] Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.164762 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-kcg4w"] Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.186980 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") pod \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.187142 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host" (OuterVolumeSpecName: "host") pod "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" (UID: "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.187416 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") pod \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\" (UID: \"5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0\") " Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.187851 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.200946 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc" (OuterVolumeSpecName: "kube-api-access-d4gnc") pod "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" (UID: "5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0"). InnerVolumeSpecName "kube-api-access-d4gnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:49:13 crc kubenswrapper[4675]: I0124 07:49:13.289786 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4gnc\" (UniqueName: \"kubernetes.io/projected/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0-kube-api-access-d4gnc\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.021298 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765f5a2c75ce4bde49578f8726b2b1797d29bab8cd8ea6d1a31e4bd0cb2c136e" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.021341 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-kcg4w" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.391473 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-df7rh"] Jan 24 07:49:14 crc kubenswrapper[4675]: E0124 07:49:14.391930 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerName="container-00" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.391945 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerName="container-00" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.392154 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" containerName="container-00" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.392817 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.394669 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rk4k5"/"default-dockercfg-grs5v" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.511019 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.511349 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " 
pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.612459 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.612511 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.612694 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.633568 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"crc-debug-df7rh\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.708582 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:14 crc kubenswrapper[4675]: I0124 07:49:14.951275 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0" path="/var/lib/kubelet/pods/5f8fcc1d-4f4e-4756-a0ee-dad9db37efd0/volumes" Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.031102 4675 generic.go:334] "Generic (PLEG): container finished" podID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerID="6147927f4957f1dc3488aaede237f3f7588cb12024d74ec9e368eed77317985e" exitCode=1 Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.031142 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" event={"ID":"f989d7ff-d822-48c0-9434-7c435c1a3897","Type":"ContainerDied","Data":"6147927f4957f1dc3488aaede237f3f7588cb12024d74ec9e368eed77317985e"} Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.031190 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" event={"ID":"f989d7ff-d822-48c0-9434-7c435c1a3897","Type":"ContainerStarted","Data":"adaf49d2751b18eb74cc36ca9f72bb613d4462194ef46032b228ad790f3431b5"} Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.064994 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-df7rh"] Jan 24 07:49:15 crc kubenswrapper[4675]: I0124 07:49:15.076160 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk4k5/crc-debug-df7rh"] Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.159475 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.255122 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") pod \"f989d7ff-d822-48c0-9434-7c435c1a3897\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.255208 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") pod \"f989d7ff-d822-48c0-9434-7c435c1a3897\" (UID: \"f989d7ff-d822-48c0-9434-7c435c1a3897\") " Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.256565 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host" (OuterVolumeSpecName: "host") pod "f989d7ff-d822-48c0-9434-7c435c1a3897" (UID: "f989d7ff-d822-48c0-9434-7c435c1a3897"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.269926 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b" (OuterVolumeSpecName: "kube-api-access-nsj5b") pod "f989d7ff-d822-48c0-9434-7c435c1a3897" (UID: "f989d7ff-d822-48c0-9434-7c435c1a3897"). InnerVolumeSpecName "kube-api-access-nsj5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.356804 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f989d7ff-d822-48c0-9434-7c435c1a3897-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.356835 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsj5b\" (UniqueName: \"kubernetes.io/projected/f989d7ff-d822-48c0-9434-7c435c1a3897-kube-api-access-nsj5b\") on node \"crc\" DevicePath \"\"" Jan 24 07:49:16 crc kubenswrapper[4675]: I0124 07:49:16.952576 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" path="/var/lib/kubelet/pods/f989d7ff-d822-48c0-9434-7c435c1a3897/volumes" Jan 24 07:49:17 crc kubenswrapper[4675]: I0124 07:49:17.055212 4675 scope.go:117] "RemoveContainer" containerID="6147927f4957f1dc3488aaede237f3f7588cb12024d74ec9e368eed77317985e" Jan 24 07:49:17 crc kubenswrapper[4675]: I0124 07:49:17.055761 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/crc-debug-df7rh" Jan 24 07:49:38 crc kubenswrapper[4675]: I0124 07:49:38.629761 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:49:38 crc kubenswrapper[4675]: I0124 07:49:38.630295 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.318945 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.454154 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api-log/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.509349 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.565997 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener-log/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.769502 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.806582 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker-log/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.957571 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw_e9b8f08b-6ece-4b46-86c0-9c353d61c50c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:04 crc kubenswrapper[4675]: I0124 07:50:04.993870 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-central-agent/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.063055 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-notification-agent/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.201806 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/sg-core/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.225691 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/proxy-httpd/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.313269 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.392361 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api-log/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.546406 4675 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/probe/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.608674 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/cinder-scheduler/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.774786 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-td879_bc52fac9-92d8-4555-b942-5f0dcb4bf6f3/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:05 crc kubenswrapper[4675]: I0124 07:50:05.864212 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm_eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.003429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.193120 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.227543 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/dnsmasq-dns/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.233126 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-49lhh_09d123a4-63c4-4269-b4e1-12932baedfd0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.439189 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-log/0.log"
Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.441105 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-httpd/0.log"
Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.633862 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-httpd/0.log"
Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.661034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-log/0.log"
Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.845249 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/1.log"
Jan 24 07:50:06 crc kubenswrapper[4675]: I0124 07:50:06.896758 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/0.log"
Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.147539 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh_2d09456f-a230-420b-b288-c0dc3e8a6e22/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.224957 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon-log/0.log"
Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.316070 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vbvgv_27ad7637-701b-43e1-8440-0fd32522fc56/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.502202 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dbffd67c8-k8gzb_405f0f26-61a4-4420-a147-43d7b86ebb8e/keystone-api/0.log"
Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.545054 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b742b344-80ea-48bf-bd28-8f1be00b4442/kube-state-metrics/0.log"
Jan 24 07:50:07 crc kubenswrapper[4675]: I0124 07:50:07.727045 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq_d457c71e-ef41-4bf9-a59b-b3221df26b41/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.021245 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-api/0.log"
Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.078201 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-httpd/0.log"
Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.359214 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g_388e10c7-15e4-40d5-94ed-5c6612f7fbfe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.630278 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.630326 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.891884 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-api/0.log"
Jan 24 07:50:08 crc kubenswrapper[4675]: I0124 07:50:08.902198 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-log/0.log"
Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.002034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_a3a43606-cba1-4fca-93c4-a1937ee449cc/nova-cell0-conductor-conductor/0.log"
Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.211870 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8afe3d83-5678-47e9-be7d-dfbf50fa5bc9/nova-cell1-conductor-conductor/0.log"
Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.312902 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a485ae65-6b4d-4cc6-9623-dc0b722f47e8/nova-cell1-novncproxy-novncproxy/0.log"
Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.642684 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-log/0.log"
Jan 24 07:50:09 crc kubenswrapper[4675]: I0124 07:50:09.646781 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-k8fng_f4024f70-df50-442c-bcd5-c599d978277c/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.033689 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_361b5d16-2808-40ad-88a0-f07fd4c33e3e/nova-scheduler-scheduler/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.064544 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.242263 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.247274 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/galera/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.451608 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.617592 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-metadata/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.766187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.860898 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/galera/0.log"
Jan 24 07:50:10 crc kubenswrapper[4675]: I0124 07:50:10.881052 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb/openstackclient/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.056641 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2x2kb_b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1/ovn-controller/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.214409 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b7pft_1e0062ff-7e89-4c55-8796-de1c9e311dd2/openstack-network-exporter/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.314491 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.524342 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.527996 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.545043 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovs-vswitchd/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.761670 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/openstack-network-exporter/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.814327 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7vbln_3e407880-d27a-4aa2-bb81-a87bb20ffcf1/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:11 crc kubenswrapper[4675]: I0124 07:50:11.923362 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/ovn-northd/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.054899 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/openstack-network-exporter/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.121450 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/ovsdbserver-nb/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.304323 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/ovsdbserver-sb/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.365011 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/openstack-network-exporter/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.580424 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-api/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.590812 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-log/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.741573 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log"
Jan 24 07:50:12 crc kubenswrapper[4675]: I0124 07:50:12.972773 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/rabbitmq/0.log"
Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.051112 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log"
Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.094953 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log"
Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.189621 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log"
Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.293126 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/rabbitmq/0.log"
Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.460327 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw_7b1b0570-d3a2-4029-bcf8-f41144ea0f06/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.576468 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zd8ln_55150857-7da2-4609-84be-9cbaa28141ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:13 crc kubenswrapper[4675]: I0124 07:50:13.881863 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q_774fb762-6506-4e0c-9732-9208f7802057/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.111934 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ln8x2_3bc4008d-f8c6-4745-b524-d6136632cbfb/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.165215 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wq6r9_191f15b0-8a3b-4dc4-bc49-9003c61619bf/ssh-known-hosts-edpm-deployment/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.483490 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-server/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.548021 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-httpd/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.601894 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sz46b_57da3a87-eeeb-47c8-b1bd-6a160dd81ff8/swift-ring-rebalance/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.792473 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"]
Jan 24 07:50:14 crc kubenswrapper[4675]: E0124 07:50:14.792869 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerName="container-00"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.792886 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerName="container-00"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.793071 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f989d7ff-d822-48c0-9434-7c435c1a3897" containerName="container-00"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.794389 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.808408 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"]
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.907510 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-auditor/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.960823 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-reaper/0.log"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.966255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.966305 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:14 crc kubenswrapper[4675]: I0124 07:50:14.966337 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.048145 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-replicator/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.067427 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.067491 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.067612 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.068035 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.068488 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.104710 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"redhat-marketplace-wsg5j\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") " pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.130636 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.182640 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-server/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.230667 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-auditor/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.510939 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-server/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.513623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-replicator/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.619374 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-auditor/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.625221 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-updater/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.655512 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"]
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.796145 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-replicator/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.823865 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-expirer/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.902094 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-server/0.log"
Jan 24 07:50:15 crc kubenswrapper[4675]: I0124 07:50:15.980932 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/rsync/0.log"
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.025481 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-updater/0.log"
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.124754 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/swift-recon-cron/0.log"
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.362934 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx_e47d7738-3361-429e-90f9-02dee4f0052e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.497879 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0e021dd7-397f-4546-a38b-c8c13a1c830d/tempest-tests-tempest-tests-runner/0.log"
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.555421 4675 generic.go:334] "Generic (PLEG): container finished" podID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" exitCode=0
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.555464 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93"}
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.555490 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerStarted","Data":"de12482b41e0e33cceec8470bd0aa79f9be6258755a8988e76fc4bad5d7ee95c"}
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.572253 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6051ab9a-5c43-4757-a1ff-3f199dee0a79/test-operator-logs-container/0.log"
Jan 24 07:50:16 crc kubenswrapper[4675]: I0124 07:50:16.791889 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc_e9c128cc-910c-4ef2-9b56-14adf4d264b3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.390468 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r877k"]
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.392805 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.440127 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r877k"]
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.520392 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.520658 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.520756 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.569736 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerStarted","Data":"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76"}
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.621970 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622065 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622084 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622465 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.622696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.648366 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"community-operators-r877k\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:17 crc kubenswrapper[4675]: I0124 07:50:17.719504 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.307521 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r877k"]
Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.581598 4675 generic.go:334] "Generic (PLEG): container finished" podID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" exitCode=0
Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.581953 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76"}
Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.590636 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1396ef9-9c28-4198-a055-b132c7205bff" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" exitCode=0
Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.590677 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8"}
Jan 24 07:50:18 crc kubenswrapper[4675]: I0124 07:50:18.590701 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerStarted","Data":"e2a05fb6cdb9cadfc028e071a632b920a8f3f839f100de17bd3aba7d97eb8e93"}
Jan 24 07:50:19 crc kubenswrapper[4675]: I0124 07:50:19.601451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerStarted","Data":"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb"}
Jan 24 07:50:19 crc kubenswrapper[4675]: I0124 07:50:19.605959 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerStarted","Data":"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f"}
Jan 24 07:50:21 crc kubenswrapper[4675]: I0124 07:50:21.621020 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1396ef9-9c28-4198-a055-b132c7205bff" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" exitCode=0
Jan 24 07:50:21 crc kubenswrapper[4675]: I0124 07:50:21.621466 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb"}
Jan 24 07:50:21 crc kubenswrapper[4675]: I0124 07:50:21.642293 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wsg5j" podStartSLOduration=5.150707552 podStartE2EDuration="7.642278235s" podCreationTimestamp="2026-01-24 07:50:14 +0000 UTC" firstStartedPulling="2026-01-24 07:50:16.557207557 +0000 UTC m=+3417.853312780" lastFinishedPulling="2026-01-24 07:50:19.04877824 +0000 UTC m=+3420.344883463" observedRunningTime="2026-01-24 07:50:19.660138328 +0000 UTC m=+3420.956243551" watchObservedRunningTime="2026-01-24 07:50:21.642278235 +0000 UTC m=+3422.938383458"
Jan 24 07:50:22 crc kubenswrapper[4675]: I0124 07:50:22.640848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerStarted","Data":"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22"}
Jan 24 07:50:22 crc kubenswrapper[4675]: I0124 07:50:22.665746 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r877k" podStartSLOduration=2.072120666 podStartE2EDuration="5.665732235s" podCreationTimestamp="2026-01-24 07:50:17 +0000 UTC" firstStartedPulling="2026-01-24 07:50:18.592238679 +0000 UTC m=+3419.888343902" lastFinishedPulling="2026-01-24 07:50:22.185850248 +0000 UTC m=+3423.481955471" observedRunningTime="2026-01-24 07:50:22.66306319 +0000 UTC m=+3423.959168413" watchObservedRunningTime="2026-01-24 07:50:22.665732235 +0000 UTC m=+3423.961837458"
Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.146995 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.147274 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.221980 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:25 crc kubenswrapper[4675]: I0124 07:50:25.727897 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:26 crc kubenswrapper[4675]: I0124 07:50:26.390179 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"]
Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.712549 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wsg5j" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" containerID="cri-o://c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" gracePeriod=2
Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.720207 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.721394 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:27 crc kubenswrapper[4675]: I0124 07:50:27.787574 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r877k"
Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.199833 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j"
Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.251791 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") pod \"283b234b-97b3-4128-b75e-c07e0dd22cd8\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") "
Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.251981 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") pod \"283b234b-97b3-4128-b75e-c07e0dd22cd8\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") "
Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.252061 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") pod \"283b234b-97b3-4128-b75e-c07e0dd22cd8\" (UID: \"283b234b-97b3-4128-b75e-c07e0dd22cd8\") "
Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.253309 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities" (OuterVolumeSpecName: "utilities") pod "283b234b-97b3-4128-b75e-c07e0dd22cd8" (UID: "283b234b-97b3-4128-b75e-c07e0dd22cd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.275059 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk" (OuterVolumeSpecName: "kube-api-access-fpsdk") pod "283b234b-97b3-4128-b75e-c07e0dd22cd8" (UID: "283b234b-97b3-4128-b75e-c07e0dd22cd8"). InnerVolumeSpecName "kube-api-access-fpsdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.281095 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "283b234b-97b3-4128-b75e-c07e0dd22cd8" (UID: "283b234b-97b3-4128-b75e-c07e0dd22cd8"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.353965 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpsdk\" (UniqueName: \"kubernetes.io/projected/283b234b-97b3-4128-b75e-c07e0dd22cd8-kube-api-access-fpsdk\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.354003 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.354015 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/283b234b-97b3-4128-b75e-c07e0dd22cd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.721164 4675 generic.go:334] "Generic (PLEG): container finished" podID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" exitCode=0 Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.721903 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsg5j" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.722126 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f"} Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.722159 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsg5j" event={"ID":"283b234b-97b3-4128-b75e-c07e0dd22cd8","Type":"ContainerDied","Data":"de12482b41e0e33cceec8470bd0aa79f9be6258755a8988e76fc4bad5d7ee95c"} Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.722176 4675 scope.go:117] "RemoveContainer" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.752110 4675 scope.go:117] "RemoveContainer" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.788979 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.790152 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.795768 4675 scope.go:117] "RemoveContainer" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.809212 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsg5j"] Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.868532 4675 scope.go:117] "RemoveContainer" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" Jan 24 07:50:28 crc 
kubenswrapper[4675]: E0124 07:50:28.869364 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f\": container with ID starting with c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f not found: ID does not exist" containerID="c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869396 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f"} err="failed to get container status \"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f\": rpc error: code = NotFound desc = could not find container \"c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f\": container with ID starting with c5fc5aded85cf9ccc8385535ad0a7b59421d4015a2e9c46591276480f2564f5f not found: ID does not exist" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869416 4675 scope.go:117] "RemoveContainer" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" Jan 24 07:50:28 crc kubenswrapper[4675]: E0124 07:50:28.869657 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76\": container with ID starting with ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76 not found: ID does not exist" containerID="ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869675 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76"} err="failed to get container status 
\"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76\": rpc error: code = NotFound desc = could not find container \"ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76\": container with ID starting with ec7e75e5ca7d735351d4e6ca2ee17d63ac10e605a075bf91bdc1d4aaa62e7a76 not found: ID does not exist" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.869689 4675 scope.go:117] "RemoveContainer" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" Jan 24 07:50:28 crc kubenswrapper[4675]: E0124 07:50:28.869950 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93\": container with ID starting with 380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93 not found: ID does not exist" containerID="380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.870008 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93"} err="failed to get container status \"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93\": rpc error: code = NotFound desc = could not find container \"380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93\": container with ID starting with 380eeff39b329da2d460ec7c8356c46ae9820a0fc0c6e5ddbbdee383f3d3ae93 not found: ID does not exist" Jan 24 07:50:28 crc kubenswrapper[4675]: I0124 07:50:28.954318 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" path="/var/lib/kubelet/pods/283b234b-97b3-4128-b75e-c07e0dd22cd8/volumes" Jan 24 07:50:30 crc kubenswrapper[4675]: I0124 07:50:30.713903 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_b2446e52-3d97-46f2-ac99-4bb1af82d302/memcached/0.log" Jan 24 07:50:31 crc kubenswrapper[4675]: I0124 07:50:31.185045 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:31 crc kubenswrapper[4675]: I0124 07:50:31.742070 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r877k" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" containerID="cri-o://353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" gracePeriod=2 Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.230269 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.317522 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") pod \"b1396ef9-9c28-4198-a055-b132c7205bff\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.317881 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") pod \"b1396ef9-9c28-4198-a055-b132c7205bff\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.317909 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") pod \"b1396ef9-9c28-4198-a055-b132c7205bff\" (UID: \"b1396ef9-9c28-4198-a055-b132c7205bff\") " Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.319592 4675 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities" (OuterVolumeSpecName: "utilities") pod "b1396ef9-9c28-4198-a055-b132c7205bff" (UID: "b1396ef9-9c28-4198-a055-b132c7205bff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.346066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct" (OuterVolumeSpecName: "kube-api-access-lk6ct") pod "b1396ef9-9c28-4198-a055-b132c7205bff" (UID: "b1396ef9-9c28-4198-a055-b132c7205bff"). InnerVolumeSpecName "kube-api-access-lk6ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.379156 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1396ef9-9c28-4198-a055-b132c7205bff" (UID: "b1396ef9-9c28-4198-a055-b132c7205bff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.419900 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk6ct\" (UniqueName: \"kubernetes.io/projected/b1396ef9-9c28-4198-a055-b132c7205bff-kube-api-access-lk6ct\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.419947 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.419958 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1396ef9-9c28-4198-a055-b132c7205bff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751768 4675 generic.go:334] "Generic (PLEG): container finished" podID="b1396ef9-9c28-4198-a055-b132c7205bff" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" exitCode=0 Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751809 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r877k" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751819 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22"} Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751868 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r877k" event={"ID":"b1396ef9-9c28-4198-a055-b132c7205bff","Type":"ContainerDied","Data":"e2a05fb6cdb9cadfc028e071a632b920a8f3f839f100de17bd3aba7d97eb8e93"} Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.751888 4675 scope.go:117] "RemoveContainer" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.773331 4675 scope.go:117] "RemoveContainer" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.786169 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.809577 4675 scope.go:117] "RemoveContainer" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.818175 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r877k"] Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.849636 4675 scope.go:117] "RemoveContainer" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" Jan 24 07:50:32 crc kubenswrapper[4675]: E0124 07:50:32.864102 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22\": container with ID starting with 353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22 not found: ID does not exist" containerID="353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864144 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22"} err="failed to get container status \"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22\": rpc error: code = NotFound desc = could not find container \"353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22\": container with ID starting with 353dcb597cb7492a1aa1732c38aad82af6ce802a99b3ecd3108c69a8215ebb22 not found: ID does not exist" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864180 4675 scope.go:117] "RemoveContainer" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" Jan 24 07:50:32 crc kubenswrapper[4675]: E0124 07:50:32.864409 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb\": container with ID starting with 4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb not found: ID does not exist" containerID="4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864429 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb"} err="failed to get container status \"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb\": rpc error: code = NotFound desc = could not find container \"4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb\": container with ID 
starting with 4d1c1a2bd1d6c1bc57f95f9fac713c2c318bcb33d645a1a66e7cd3feed7229eb not found: ID does not exist" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864442 4675 scope.go:117] "RemoveContainer" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" Jan 24 07:50:32 crc kubenswrapper[4675]: E0124 07:50:32.864603 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8\": container with ID starting with 9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8 not found: ID does not exist" containerID="9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.864621 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8"} err="failed to get container status \"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8\": rpc error: code = NotFound desc = could not find container \"9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8\": container with ID starting with 9d801b476dd167e54808874cc88d90f8b5fc8a7b4a8b9734c081fe4dc5640cc8 not found: ID does not exist" Jan 24 07:50:32 crc kubenswrapper[4675]: I0124 07:50:32.951743 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" path="/var/lib/kubelet/pods/b1396ef9-9c28-4198-a055-b132c7205bff/volumes" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.644354 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 
07:50:38.644882 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.644924 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.645958 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.646027 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9" gracePeriod=600 Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.805681 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9" exitCode=0 Jan 24 07:50:38 crc kubenswrapper[4675]: I0124 07:50:38.805782 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9"} Jan 24 07:50:38 crc 
kubenswrapper[4675]: I0124 07:50:38.806051 4675 scope.go:117] "RemoveContainer" containerID="dae7e51cff9dbd84b0f41b829105184254804ef56da93819ec0964e7f6f633bd" Jan 24 07:50:39 crc kubenswrapper[4675]: I0124 07:50:39.816581 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"} Jan 24 07:50:49 crc kubenswrapper[4675]: I0124 07:50:49.788831 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-dwbq6_2db25911-f36e-43ae-8f47-b042ec82266e/manager/0.log" Jan 24 07:50:49 crc kubenswrapper[4675]: I0124 07:50:49.975934 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.167798 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.201883 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.238691 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.448896 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.475271 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.537749 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/extract/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.791743 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-6jbwg_b8285f65-9930-4bb9-9e18-b6ffe19f45fb/manager/0.log" Jan 24 07:50:50 crc kubenswrapper[4675]: I0124 07:50:50.812210 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-79fwx_6003a1f9-ad0e-49f6-8750-6ac2208560cc/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.019135 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-thqtz_e7263d16-14c3-4254-821a-cbf99b7cf3e4/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.166439 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-mqk98_7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.411040 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-67vkh_4aa5aa88-c6f2-4000-9a9d-3b14e23220de/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.661951 4675 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-l7jq5_06f423e8-7ba9-497d-a587-cc880d66625b/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.726469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-c5658_743af71f-3542-439c-b3a1-33a7b9ae34f1/manager/0.log" Jan 24 07:50:51 crc kubenswrapper[4675]: I0124 07:50:51.894287 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bqd4q_5b3a45f7-a1eb-44a2-b0be-7c77b190d50c/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.028609 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-6lq96_e09ce8a8-a2a4-4fec-b36d-a97910aced0f/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.152570 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-vjf84_7660e41e-527d-4806-8ef3-6dee25fa72c5/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.301405 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-dzvlp_724ac56d-9f4e-40f9-98f7-3a65c807f89c/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.472597 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4lmvf_6f867475-7eee-431c-97ee-12ae861193c7/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.573965 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q6qn9_bdc167a3-9335-4b3d-9696-a1d03b9ae618/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 
07:50:52.723446 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk_ac97fbc7-211e-41e3-8e16-aff853a7c9f4/manager/0.log" Jan 24 07:50:52 crc kubenswrapper[4675]: I0124 07:50:52.932866 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d498c57f9-4vbdv_fc267189-e8ca-412c-bb9a-6b251571a514/operator/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.246893 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-d4hsh_954076ba-3e6f-4e5b-9b3f-4637840d5021/registry-server/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.601026 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-n4kll_a1041f21-5d7d-4b17-84ff-ee83332e604d/manager/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.792623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-l5hrz_20b0ee18-4569-4428-956f-d8795904f368/manager/0.log" Jan 24 07:50:53 crc kubenswrapper[4675]: I0124 07:50:53.962453 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9cmpf_b7d1f492-700c-492e-a1c2-eae496f0133c/operator/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.048498 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-688fccdd58-dkxf7_d94b056e-c445-4033-8d02-a794dae4b671/manager/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.248213 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d55b89685-9rvmf_4bfb9011-058d-494d-96ce-a39202c7b851/manager/0.log" Jan 24 07:50:54 crc 
kubenswrapper[4675]: I0124 07:50:54.303174 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-n6jmw_47e89f8e-f652-43a1-a36a-2db184700f3e/manager/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.517303 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-k7crk_fae349a1-6c08-4424-abe2-42dddccd55cc/manager/0.log" Jan 24 07:50:54 crc kubenswrapper[4675]: I0124 07:50:54.552006 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-9fkjr_f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480/manager/0.log" Jan 24 07:51:16 crc kubenswrapper[4675]: I0124 07:51:16.023493 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kdjm5_e08de50b-8092-4f29-b2a8-a391b4778142/control-plane-machine-set-operator/0.log" Jan 24 07:51:16 crc kubenswrapper[4675]: I0124 07:51:16.335154 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/kube-rbac-proxy/0.log" Jan 24 07:51:16 crc kubenswrapper[4675]: I0124 07:51:16.386980 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/machine-api-operator/0.log" Jan 24 07:51:30 crc kubenswrapper[4675]: I0124 07:51:30.965834 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gt7xw_f9d3eaae-49ca-400c-a277-bdbad7f8125a/cert-manager-controller/0.log" Jan 24 07:51:31 crc kubenswrapper[4675]: I0124 07:51:31.151023 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6kp8k_99008be6-effb-4dc7-a761-ee291c03f093/cert-manager-cainjector/0.log" 
Jan 24 07:51:31 crc kubenswrapper[4675]: I0124 07:51:31.215504 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lthpk_261785a7-b436-4597-a36b-473d27769006/cert-manager-webhook/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.342111 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-szblh_b289d862-4851-4f88-9a5b-4bed8cd70bd8/nmstate-console-plugin/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.588084 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ljst6_8c82b668-f857-4de6-a938-333a7e44591f/nmstate-handler/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.647063 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/kube-rbac-proxy/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.676963 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/nmstate-metrics/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.877790 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dm24p_b344cabd-3dd6-4691-990b-045aaf4c622f/nmstate-operator/0.log" Jan 24 07:51:44 crc kubenswrapper[4675]: I0124 07:51:44.924431 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-77dfm_469eb31f-c261-4d7f-8a12-c10ed969bd55/nmstate-webhook/0.log" Jan 24 07:52:15 crc kubenswrapper[4675]: I0124 07:52:15.559019 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/controller/0.log" Jan 24 07:52:15 crc kubenswrapper[4675]: I0124 07:52:15.616972 4675 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/kube-rbac-proxy/0.log" Jan 24 07:52:15 crc kubenswrapper[4675]: I0124 07:52:15.925865 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.206041 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.210260 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.241729 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.264136 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.431861 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.477588 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.516397 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.539453 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.717592 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.739155 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.740298 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.821768 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/controller/0.log" Jan 24 07:52:16 crc kubenswrapper[4675]: I0124 07:52:16.967013 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr-metrics/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.142450 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.217920 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/reloader/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.244673 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy-frr/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.521164 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-skd24_032ac1eb-bb7f-4f94-b9ad-4d710032f3af/frr-k8s-webhook-server/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.730117 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d867674d-x4v6v_0cf0ee32-c416-4629-a441-268fbe054062/manager/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.873093 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr/0.log" Jan 24 07:52:17 crc kubenswrapper[4675]: I0124 07:52:17.889980 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f499b46f-tntmc_893cbc8e-86ae-4910-8693-061301da0ba6/webhook-server/0.log" Jan 24 07:52:18 crc kubenswrapper[4675]: I0124 07:52:18.180114 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/kube-rbac-proxy/0.log" Jan 24 07:52:18 crc kubenswrapper[4675]: I0124 07:52:18.391527 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/speaker/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.445852 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.627048 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.665029 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.680131 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 07:52:33 crc kubenswrapper[4675]: I0124 07:52:33.996398 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.023132 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/extract/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.027626 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.221412 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.488398 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.493010 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 
07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.540619 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.786962 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.790426 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 07:52:34 crc kubenswrapper[4675]: I0124 07:52:34.881429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/extract/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.004900 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.149292 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.207981 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.238939 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 
07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.395344 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.412424 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.682632 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.805618 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/registry-server/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.916498 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.922912 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 07:52:35 crc kubenswrapper[4675]: I0124 07:52:35.969435 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.256447 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.330522 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.598206 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9cx7r_83c80cb7-74c3-417a-8d8e-54cdcf640b5b/marketplace-operator/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.890809 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/registry-server/0.log" Jan 24 07:52:36 crc kubenswrapper[4675]: I0124 07:52:36.953126 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.069132 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.114074 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.119655 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.373780 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.427751 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/registry-server/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.440752 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.582917 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.813450 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.826564 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.875346 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.976212 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 07:52:37 crc kubenswrapper[4675]: I0124 07:52:37.987786 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 07:52:38 crc kubenswrapper[4675]: I0124 07:52:38.355240 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/registry-server/0.log" Jan 24 
07:52:38 crc kubenswrapper[4675]: I0124 07:52:38.629901 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:52:38 crc kubenswrapper[4675]: I0124 07:52:38.629948 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:08 crc kubenswrapper[4675]: I0124 07:53:08.631072 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:53:08 crc kubenswrapper[4675]: I0124 07:53:08.631687 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.630416 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.631036 4675 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.631106 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.632222 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 07:53:38 crc kubenswrapper[4675]: I0124 07:53:38.632301 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" gracePeriod=600 Jan 24 07:53:38 crc kubenswrapper[4675]: E0124 07:53:38.767615 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.516235 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" 
containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" exitCode=0 Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.516453 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"} Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.516489 4675 scope.go:117] "RemoveContainer" containerID="0a6a61085acc256df533325dcd06e983d7d8d92776e231385eb1bb2bd81838c9" Jan 24 07:53:39 crc kubenswrapper[4675]: I0124 07:53:39.517122 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:53:39 crc kubenswrapper[4675]: E0124 07:53:39.517473 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.078552 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079562 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079579 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079600 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079609 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079622 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079631 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-utilities" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079649 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079659 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079679 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079688 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="extract-content" Jan 24 07:53:51 crc kubenswrapper[4675]: E0124 07:53:51.079708 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079734 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.079989 4675 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="283b234b-97b3-4128-b75e-c07e0dd22cd8" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.080007 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1396ef9-9c28-4198-a055-b132c7205bff" containerName="registry-server" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.081744 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.107423 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.277421 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.277518 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.277570 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379269 4675 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379357 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379407 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379932 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.379978 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.422624 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jqb7\" (UniqueName: 
\"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"redhat-operators-j8qd6\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:51 crc kubenswrapper[4675]: I0124 07:53:51.712917 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.216331 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.656113 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" exitCode=0 Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.656533 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07"} Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.656587 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerStarted","Data":"b8ee1a0a1cf2fb484747062f4702e1804aef2c397b833f4b7f7333428f156e56"} Jan 24 07:53:52 crc kubenswrapper[4675]: I0124 07:53:52.662564 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 07:53:53 crc kubenswrapper[4675]: I0124 07:53:53.672666 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerStarted","Data":"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca"} Jan 24 07:53:53 crc 
kubenswrapper[4675]: I0124 07:53:53.942861 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:53:53 crc kubenswrapper[4675]: E0124 07:53:53.943403 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:53:57 crc kubenswrapper[4675]: I0124 07:53:57.726048 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" exitCode=0 Jan 24 07:53:57 crc kubenswrapper[4675]: I0124 07:53:57.726153 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca"} Jan 24 07:53:59 crc kubenswrapper[4675]: I0124 07:53:59.757292 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerStarted","Data":"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09"} Jan 24 07:54:01 crc kubenswrapper[4675]: I0124 07:54:01.713098 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:01 crc kubenswrapper[4675]: I0124 07:54:01.713410 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:02 crc kubenswrapper[4675]: I0124 07:54:02.774752 4675 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j8qd6" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" probeResult="failure" output=< Jan 24 07:54:02 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 07:54:02 crc kubenswrapper[4675]: > Jan 24 07:54:07 crc kubenswrapper[4675]: I0124 07:54:07.943347 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:54:07 crc kubenswrapper[4675]: E0124 07:54:07.945227 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:54:11 crc kubenswrapper[4675]: I0124 07:54:11.801145 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:11 crc kubenswrapper[4675]: I0124 07:54:11.827914 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j8qd6" podStartSLOduration=14.791716404 podStartE2EDuration="20.82789395s" podCreationTimestamp="2026-01-24 07:53:51 +0000 UTC" firstStartedPulling="2026-01-24 07:53:52.659207418 +0000 UTC m=+3633.955312641" lastFinishedPulling="2026-01-24 07:53:58.695384954 +0000 UTC m=+3639.991490187" observedRunningTime="2026-01-24 07:53:59.788733237 +0000 UTC m=+3641.084838470" watchObservedRunningTime="2026-01-24 07:54:11.82789395 +0000 UTC m=+3653.123999183" Jan 24 07:54:11 crc kubenswrapper[4675]: I0124 07:54:11.880646 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:12 crc kubenswrapper[4675]: I0124 07:54:12.049211 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:54:12 crc kubenswrapper[4675]: I0124 07:54:12.876607 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j8qd6" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" containerID="cri-o://5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" gracePeriod=2 Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.339348 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.359196 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") pod \"6e6a9d98-2990-4c73-acc3-48d6623eb351\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.359321 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") pod \"6e6a9d98-2990-4c73-acc3-48d6623eb351\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.360413 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities" (OuterVolumeSpecName: "utilities") pod "6e6a9d98-2990-4c73-acc3-48d6623eb351" (UID: "6e6a9d98-2990-4c73-acc3-48d6623eb351"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.360582 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") pod \"6e6a9d98-2990-4c73-acc3-48d6623eb351\" (UID: \"6e6a9d98-2990-4c73-acc3-48d6623eb351\") " Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.362585 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.375897 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7" (OuterVolumeSpecName: "kube-api-access-9jqb7") pod "6e6a9d98-2990-4c73-acc3-48d6623eb351" (UID: "6e6a9d98-2990-4c73-acc3-48d6623eb351"). InnerVolumeSpecName "kube-api-access-9jqb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.464537 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jqb7\" (UniqueName: \"kubernetes.io/projected/6e6a9d98-2990-4c73-acc3-48d6623eb351-kube-api-access-9jqb7\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.482109 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e6a9d98-2990-4c73-acc3-48d6623eb351" (UID: "6e6a9d98-2990-4c73-acc3-48d6623eb351"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.566415 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6a9d98-2990-4c73-acc3-48d6623eb351-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.890973 4675 generic.go:334] "Generic (PLEG): container finished" podID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" exitCode=0 Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891060 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8qd6" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891075 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09"} Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891537 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8qd6" event={"ID":"6e6a9d98-2990-4c73-acc3-48d6623eb351","Type":"ContainerDied","Data":"b8ee1a0a1cf2fb484747062f4702e1804aef2c397b833f4b7f7333428f156e56"} Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.891602 4675 scope.go:117] "RemoveContainer" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.916773 4675 scope.go:117] "RemoveContainer" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.936707 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 
07:54:13.947041 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j8qd6"] Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.948397 4675 scope.go:117] "RemoveContainer" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.997053 4675 scope.go:117] "RemoveContainer" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" Jan 24 07:54:13 crc kubenswrapper[4675]: E0124 07:54:13.997507 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09\": container with ID starting with 5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09 not found: ID does not exist" containerID="5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.997547 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09"} err="failed to get container status \"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09\": rpc error: code = NotFound desc = could not find container \"5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09\": container with ID starting with 5f131c1675c98af6f51a998f7e1d65dcdd642c1be6d49edbcd6af1cd42ab8a09 not found: ID does not exist" Jan 24 07:54:13 crc kubenswrapper[4675]: I0124 07:54:13.997570 4675 scope.go:117] "RemoveContainer" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" Jan 24 07:54:14 crc kubenswrapper[4675]: E0124 07:54:14.000807 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca\": container with ID 
starting with b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca not found: ID does not exist" containerID="b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.000840 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca"} err="failed to get container status \"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca\": rpc error: code = NotFound desc = could not find container \"b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca\": container with ID starting with b5c64b248fff9dcf32f2ee8572ebcf45f781509b35ff7d4a4efbab25775ac6ca not found: ID does not exist" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.000863 4675 scope.go:117] "RemoveContainer" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" Jan 24 07:54:14 crc kubenswrapper[4675]: E0124 07:54:14.001416 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07\": container with ID starting with 955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07 not found: ID does not exist" containerID="955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.001549 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07"} err="failed to get container status \"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07\": rpc error: code = NotFound desc = could not find container \"955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07\": container with ID starting with 955f5d63f2b523c054cdd48d6e536130aa9e14b4057717c7dc17e299dcbf7f07 not found: 
ID does not exist" Jan 24 07:54:14 crc kubenswrapper[4675]: I0124 07:54:14.960911 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" path="/var/lib/kubelet/pods/6e6a9d98-2990-4c73-acc3-48d6623eb351/volumes" Jan 24 07:54:19 crc kubenswrapper[4675]: I0124 07:54:19.943539 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:54:19 crc kubenswrapper[4675]: E0124 07:54:19.946258 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.056756 4675 generic.go:334] "Generic (PLEG): container finished" podID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" exitCode=0 Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.056856 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" event={"ID":"179bf7a3-0095-4057-b946-ac2ee02c99ef","Type":"ContainerDied","Data":"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588"} Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.058166 4675 scope.go:117] "RemoveContainer" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:29 crc kubenswrapper[4675]: I0124 07:54:29.888469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk4k5_must-gather-w5wnw_179bf7a3-0095-4057-b946-ac2ee02c99ef/gather/0.log" Jan 24 07:54:34 crc kubenswrapper[4675]: I0124 07:54:34.943393 4675 
scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:54:34 crc kubenswrapper[4675]: E0124 07:54:34.944609 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.607123 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:35 crc kubenswrapper[4675]: E0124 07:54:35.608043 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.608098 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" Jan 24 07:54:35 crc kubenswrapper[4675]: E0124 07:54:35.608130 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-content" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.608138 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-content" Jan 24 07:54:35 crc kubenswrapper[4675]: E0124 07:54:35.608162 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-utilities" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.608175 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="extract-utilities" Jan 24 07:54:35 crc 
kubenswrapper[4675]: I0124 07:54:35.608407 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6a9d98-2990-4c73-acc3-48d6623eb351" containerName="registry-server" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.610056 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.626080 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.678004 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbph\" (UniqueName: \"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.678059 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.678128 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.780245 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lbph\" (UniqueName: 
\"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.780306 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.780382 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.781032 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.781675 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.807541 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lbph\" (UniqueName: 
\"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"certified-operators-6xnzj\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") " pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:35 crc kubenswrapper[4675]: I0124 07:54:35.987167 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj" Jan 24 07:54:36 crc kubenswrapper[4675]: I0124 07:54:36.510914 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"] Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.181171 4675 generic.go:334] "Generic (PLEG): container finished" podID="34e0a03b-be29-4b7b-b0d8-773804190356" containerID="93f0d9f4bb35da6e1fd2b25bbd519bb50099fc2619ced465d2acbc19ff05a683" exitCode=0 Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.181219 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"93f0d9f4bb35da6e1fd2b25bbd519bb50099fc2619ced465d2acbc19ff05a683"} Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.181247 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerStarted","Data":"977f817e22bdb9bece05fa5fd4571fe1cfbc9c00d3d63bd4a3396e8eb4d8f179"} Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.460011 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.460751 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy" 
containerID="cri-o://846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" gracePeriod=2 Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.477013 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rk4k5/must-gather-w5wnw"] Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.915827 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk4k5_must-gather-w5wnw_179bf7a3-0095-4057-b946-ac2ee02c99ef/copy/0.log" Jan 24 07:54:37 crc kubenswrapper[4675]: I0124 07:54:37.916567 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.022016 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") pod \"179bf7a3-0095-4057-b946-ac2ee02c99ef\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.022066 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") pod \"179bf7a3-0095-4057-b946-ac2ee02c99ef\" (UID: \"179bf7a3-0095-4057-b946-ac2ee02c99ef\") " Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.030894 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4" (OuterVolumeSpecName: "kube-api-access-w4db4") pod "179bf7a3-0095-4057-b946-ac2ee02c99ef" (UID: "179bf7a3-0095-4057-b946-ac2ee02c99ef"). InnerVolumeSpecName "kube-api-access-w4db4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.125690 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4db4\" (UniqueName: \"kubernetes.io/projected/179bf7a3-0095-4057-b946-ac2ee02c99ef-kube-api-access-w4db4\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.173786 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "179bf7a3-0095-4057-b946-ac2ee02c99ef" (UID: "179bf7a3-0095-4057-b946-ac2ee02c99ef"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194057 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rk4k5_must-gather-w5wnw_179bf7a3-0095-4057-b946-ac2ee02c99ef/copy/0.log" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194625 4675 generic.go:334] "Generic (PLEG): container finished" podID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" exitCode=143 Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194703 4675 scope.go:117] "RemoveContainer" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.194698 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rk4k5/must-gather-w5wnw" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.199168 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerStarted","Data":"064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23"} Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.235418 4675 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/179bf7a3-0095-4057-b946-ac2ee02c99ef-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.241218 4675 scope.go:117] "RemoveContainer" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.305255 4675 scope.go:117] "RemoveContainer" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" Jan 24 07:54:38 crc kubenswrapper[4675]: E0124 07:54:38.305943 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2\": container with ID starting with 846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2 not found: ID does not exist" containerID="846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.305978 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2"} err="failed to get container status \"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2\": rpc error: code = NotFound desc = could not find container \"846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2\": container with ID starting with 
846ddf0095b3ec83151708ad7f440eeac3b9eb8b8ca98327dadc71407d0b35d2 not found: ID does not exist" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.305998 4675 scope.go:117] "RemoveContainer" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:38 crc kubenswrapper[4675]: E0124 07:54:38.306249 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588\": container with ID starting with 50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588 not found: ID does not exist" containerID="50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.306291 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588"} err="failed to get container status \"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588\": rpc error: code = NotFound desc = could not find container \"50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588\": container with ID starting with 50c777c0a28900b70f7c60fa1b8ebcaa87f699ce24eaddfb9f6aa4a37e986588 not found: ID does not exist" Jan 24 07:54:38 crc kubenswrapper[4675]: I0124 07:54:38.952350 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" path="/var/lib/kubelet/pods/179bf7a3-0095-4057-b946-ac2ee02c99ef/volumes" Jan 24 07:54:39 crc kubenswrapper[4675]: I0124 07:54:39.209474 4675 generic.go:334] "Generic (PLEG): container finished" podID="34e0a03b-be29-4b7b-b0d8-773804190356" containerID="064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23" exitCode=0 Jan 24 07:54:39 crc kubenswrapper[4675]: I0124 07:54:39.209573 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23"}
Jan 24 07:54:40 crc kubenswrapper[4675]: I0124 07:54:40.226512 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerStarted","Data":"66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58"}
Jan 24 07:54:40 crc kubenswrapper[4675]: I0124 07:54:40.251493 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xnzj" podStartSLOduration=2.798717325 podStartE2EDuration="5.251478723s" podCreationTimestamp="2026-01-24 07:54:35 +0000 UTC" firstStartedPulling="2026-01-24 07:54:37.184506963 +0000 UTC m=+3678.480612196" lastFinishedPulling="2026-01-24 07:54:39.637268361 +0000 UTC m=+3680.933373594" observedRunningTime="2026-01-24 07:54:40.241635133 +0000 UTC m=+3681.537740356" watchObservedRunningTime="2026-01-24 07:54:40.251478723 +0000 UTC m=+3681.547583946"
Jan 24 07:54:45 crc kubenswrapper[4675]: I0124 07:54:45.987990 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xnzj"
Jan 24 07:54:45 crc kubenswrapper[4675]: I0124 07:54:45.988819 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xnzj"
Jan 24 07:54:46 crc kubenswrapper[4675]: I0124 07:54:46.071105 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xnzj"
Jan 24 07:54:46 crc kubenswrapper[4675]: I0124 07:54:46.352228 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xnzj"
Jan 24 07:54:46 crc kubenswrapper[4675]: I0124 07:54:46.416766 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"]
Jan 24 07:54:48 crc kubenswrapper[4675]: I0124 07:54:48.310242 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xnzj" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server" containerID="cri-o://66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58" gracePeriod=2
Jan 24 07:54:48 crc kubenswrapper[4675]: I0124 07:54:48.948083 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:54:48 crc kubenswrapper[4675]: E0124 07:54:48.948822 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:54:49 crc kubenswrapper[4675]: I0124 07:54:49.322995 4675 generic.go:334] "Generic (PLEG): container finished" podID="34e0a03b-be29-4b7b-b0d8-773804190356" containerID="66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58" exitCode=0
Jan 24 07:54:49 crc kubenswrapper[4675]: I0124 07:54:49.323049 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58"}
Jan 24 07:54:49 crc kubenswrapper[4675]: I0124 07:54:49.947411 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj"
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.102339 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") pod \"34e0a03b-be29-4b7b-b0d8-773804190356\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") "
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.102591 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") pod \"34e0a03b-be29-4b7b-b0d8-773804190356\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") "
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.102669 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lbph\" (UniqueName: \"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") pod \"34e0a03b-be29-4b7b-b0d8-773804190356\" (UID: \"34e0a03b-be29-4b7b-b0d8-773804190356\") "
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.104318 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities" (OuterVolumeSpecName: "utilities") pod "34e0a03b-be29-4b7b-b0d8-773804190356" (UID: "34e0a03b-be29-4b7b-b0d8-773804190356"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.109071 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph" (OuterVolumeSpecName: "kube-api-access-7lbph") pod "34e0a03b-be29-4b7b-b0d8-773804190356" (UID: "34e0a03b-be29-4b7b-b0d8-773804190356"). InnerVolumeSpecName "kube-api-access-7lbph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.154714 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34e0a03b-be29-4b7b-b0d8-773804190356" (UID: "34e0a03b-be29-4b7b-b0d8-773804190356"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.205889 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.205933 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e0a03b-be29-4b7b-b0d8-773804190356-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.205945 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lbph\" (UniqueName: \"kubernetes.io/projected/34e0a03b-be29-4b7b-b0d8-773804190356-kube-api-access-7lbph\") on node \"crc\" DevicePath \"\""
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.334468 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xnzj" event={"ID":"34e0a03b-be29-4b7b-b0d8-773804190356","Type":"ContainerDied","Data":"977f817e22bdb9bece05fa5fd4571fe1cfbc9c00d3d63bd4a3396e8eb4d8f179"}
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.334532 4675 scope.go:117] "RemoveContainer" containerID="66a439ef77e64c693b436c8b316aed3dd27756e52377ab4994882d7e00101e58"
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.335937 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xnzj"
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.361052 4675 scope.go:117] "RemoveContainer" containerID="064af4fe3768745387901f6ec0b6d2a528059b0fa4dd8ab153ae5b699ca3ee23"
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.398121 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"]
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.399298 4675 scope.go:117] "RemoveContainer" containerID="93f0d9f4bb35da6e1fd2b25bbd519bb50099fc2619ced465d2acbc19ff05a683"
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.415821 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xnzj"]
Jan 24 07:54:50 crc kubenswrapper[4675]: I0124 07:54:50.966543 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" path="/var/lib/kubelet/pods/34e0a03b-be29-4b7b-b0d8-773804190356/volumes"
Jan 24 07:55:02 crc kubenswrapper[4675]: I0124 07:55:02.942648 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:55:02 crc kubenswrapper[4675]: E0124 07:55:02.945083 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:55:14 crc kubenswrapper[4675]: I0124 07:55:14.942832 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:55:14 crc kubenswrapper[4675]: E0124 07:55:14.943847 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:55:25 crc kubenswrapper[4675]: I0124 07:55:25.943110 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:55:25 crc kubenswrapper[4675]: E0124 07:55:25.945474 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:55:36 crc kubenswrapper[4675]: I0124 07:55:36.942465 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:55:36 crc kubenswrapper[4675]: E0124 07:55:36.943380 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:55:42 crc kubenswrapper[4675]: I0124 07:55:42.364245 4675 scope.go:117] "RemoveContainer" containerID="c858779e5ecc8a74de00c097b5497a5ef52c52bc32dec8633fa758d97f5458dd"
Jan 24 07:55:48 crc kubenswrapper[4675]: I0124 07:55:48.949677 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:55:48 crc kubenswrapper[4675]: E0124 07:55:48.950473 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:56:00 crc kubenswrapper[4675]: I0124 07:56:00.944331 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:56:00 crc kubenswrapper[4675]: E0124 07:56:00.945152 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:56:14 crc kubenswrapper[4675]: I0124 07:56:14.943779 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:56:14 crc kubenswrapper[4675]: E0124 07:56:14.946357 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:56:27 crc kubenswrapper[4675]: I0124 07:56:27.942332 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:56:27 crc kubenswrapper[4675]: E0124 07:56:27.943644 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:56:39 crc kubenswrapper[4675]: I0124 07:56:39.943432 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:56:39 crc kubenswrapper[4675]: E0124 07:56:39.944246 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:56:54 crc kubenswrapper[4675]: I0124 07:56:54.942927 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:56:54 crc kubenswrapper[4675]: E0124 07:56:54.944524 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:57:07 crc kubenswrapper[4675]: I0124 07:57:07.942613 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:57:07 crc kubenswrapper[4675]: E0124 07:57:07.943370 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:57:21 crc kubenswrapper[4675]: I0124 07:57:21.943525 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:57:21 crc kubenswrapper[4675]: E0124 07:57:21.944682 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194042 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"]
Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194778 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="gather"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194789 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="gather"
Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194803 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-content"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194809 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-content"
Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194821 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-utilities"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194826 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="extract-utilities"
Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194843 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194848 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy"
Jan 24 07:57:29 crc kubenswrapper[4675]: E0124 07:57:29.194862 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.194867 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.195057 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="gather"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.195078 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e0a03b-be29-4b7b-b0d8-773804190356" containerName="registry-server"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.195088 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="179bf7a3-0095-4057-b946-ac2ee02c99ef" containerName="copy"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.196042 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.219678 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-958tl"/"default-dockercfg-zclxw"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.219588 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-958tl"/"openshift-service-ca.crt"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.220379 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-958tl"/"kube-root-ca.crt"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.223234 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"]
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.300068 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.300255 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.402295 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.402389 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.402956 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.426425 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"must-gather-64vd7\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.514144 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7"
Jan 24 07:57:29 crc kubenswrapper[4675]: I0124 07:57:29.980681 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"]
Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.938405 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerStarted","Data":"c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401"}
Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.938982 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerStarted","Data":"311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526"}
Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.939000 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerStarted","Data":"96c651293ca2642071e0b3b14c3bf9d16ad024316ab3b9e09127e018f222008d"}
Jan 24 07:57:30 crc kubenswrapper[4675]: I0124 07:57:30.960657 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-958tl/must-gather-64vd7" podStartSLOduration=1.96064096 podStartE2EDuration="1.96064096s" podCreationTimestamp="2026-01-24 07:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:57:30.954058291 +0000 UTC m=+3852.250163524" watchObservedRunningTime="2026-01-24 07:57:30.96064096 +0000 UTC m=+3852.256746193"
Jan 24 07:57:32 crc kubenswrapper[4675]: I0124 07:57:32.943097 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:57:32 crc kubenswrapper[4675]: E0124 07:57:32.943691 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.344504 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/crc-debug-6h9v2"]
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.347066 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.402751 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.402809 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.504586 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.504634 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.504883 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.526576 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"crc-debug-6h9v2\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") " pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.666096 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.980122 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-6h9v2" event={"ID":"03d5f217-42b1-4355-badd-256b5fa8709d","Type":"ContainerStarted","Data":"24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9"}
Jan 24 07:57:34 crc kubenswrapper[4675]: I0124 07:57:34.980406 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-6h9v2" event={"ID":"03d5f217-42b1-4355-badd-256b5fa8709d","Type":"ContainerStarted","Data":"8855ffa6c371d306ba09e8495be65eba6dff1385e9c0091905f8b04c7dfd483c"}
Jan 24 07:57:43 crc kubenswrapper[4675]: I0124 07:57:43.942459 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:57:43 crc kubenswrapper[4675]: E0124 07:57:43.943133 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:57:56 crc kubenswrapper[4675]: I0124 07:57:56.943130 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:57:56 crc kubenswrapper[4675]: E0124 07:57:56.944014 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:58:08 crc kubenswrapper[4675]: I0124 07:58:08.247410 4675 generic.go:334] "Generic (PLEG): container finished" podID="03d5f217-42b1-4355-badd-256b5fa8709d" containerID="24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9" exitCode=0
Jan 24 07:58:08 crc kubenswrapper[4675]: I0124 07:58:08.247483 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-6h9v2" event={"ID":"03d5f217-42b1-4355-badd-256b5fa8709d","Type":"ContainerDied","Data":"24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9"}
Jan 24 07:58:08 crc kubenswrapper[4675]: I0124 07:58:08.952028 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f"
Jan 24 07:58:08 crc kubenswrapper[4675]: E0124 07:58:08.952413 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.383534 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.424645 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/crc-debug-6h9v2"]
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.436919 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/crc-debug-6h9v2"]
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.513411 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") pod \"03d5f217-42b1-4355-badd-256b5fa8709d\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") "
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.513855 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") pod \"03d5f217-42b1-4355-badd-256b5fa8709d\" (UID: \"03d5f217-42b1-4355-badd-256b5fa8709d\") "
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.514066 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host" (OuterVolumeSpecName: "host") pod "03d5f217-42b1-4355-badd-256b5fa8709d" (UID: "03d5f217-42b1-4355-badd-256b5fa8709d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.515553 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03d5f217-42b1-4355-badd-256b5fa8709d-host\") on node \"crc\" DevicePath \"\""
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.525087 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76" (OuterVolumeSpecName: "kube-api-access-6sq76") pod "03d5f217-42b1-4355-badd-256b5fa8709d" (UID: "03d5f217-42b1-4355-badd-256b5fa8709d"). InnerVolumeSpecName "kube-api-access-6sq76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 07:58:09 crc kubenswrapper[4675]: I0124 07:58:09.617535 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sq76\" (UniqueName: \"kubernetes.io/projected/03d5f217-42b1-4355-badd-256b5fa8709d-kube-api-access-6sq76\") on node \"crc\" DevicePath \"\""
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.266037 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8855ffa6c371d306ba09e8495be65eba6dff1385e9c0091905f8b04c7dfd483c"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.266096 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-6h9v2"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.689565 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/crc-debug-cthbx"]
Jan 24 07:58:10 crc kubenswrapper[4675]: E0124 07:58:10.690149 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" containerName="container-00"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.690161 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" containerName="container-00"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.690328 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" containerName="container-00"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.691065 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.838939 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.839099 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.940332 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.940455 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.940603 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.952080 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d5f217-42b1-4355-badd-256b5fa8709d" path="/var/lib/kubelet/pods/03d5f217-42b1-4355-badd-256b5fa8709d/volumes"
Jan 24 07:58:10 crc kubenswrapper[4675]: I0124 07:58:10.962358 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"crc-debug-cthbx\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.006107 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx"
Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.276774 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-cthbx" event={"ID":"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3","Type":"ContainerStarted","Data":"f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef"}
Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.277114 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-cthbx" event={"ID":"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3","Type":"ContainerStarted","Data":"6432092f2a679fd401f62b355afa6cd2e1a6614df1498db7e44271bbdc1b5fea"}
Jan 24 07:58:11 crc kubenswrapper[4675]: I0124 07:58:11.294560 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-958tl/crc-debug-cthbx" podStartSLOduration=1.294543278 podStartE2EDuration="1.294543278s" podCreationTimestamp="2026-01-24 07:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 07:58:11.29211608 +0000 UTC m=+3892.588221313" watchObservedRunningTime="2026-01-24 07:58:11.294543278 +0000 UTC m=+3892.590648501"
Jan 24 07:58:12 crc kubenswrapper[4675]: I0124 07:58:12.286474 4675 generic.go:334] "Generic (PLEG): container finished" podID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerID="f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef" exitCode=0
Jan 24 07:58:12 crc kubenswrapper[4675]: I0124 07:58:12.286521 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-cthbx" event={"ID":"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3","Type":"ContainerDied","Data":"f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef"}
Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.396677 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.433531 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/crc-debug-cthbx"] Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.442851 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/crc-debug-cthbx"] Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.526226 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") pod \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.526363 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host" (OuterVolumeSpecName: "host") pod "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" (UID: "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.526411 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") pod \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\" (UID: \"0c6e9f07-4c1a-4a2c-912d-0880df1c82f3\") " Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.527123 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.535780 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n" (OuterVolumeSpecName: "kube-api-access-bvt8n") pod "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" (UID: "0c6e9f07-4c1a-4a2c-912d-0880df1c82f3"). InnerVolumeSpecName "kube-api-access-bvt8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:58:13 crc kubenswrapper[4675]: I0124 07:58:13.628499 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvt8n\" (UniqueName: \"kubernetes.io/projected/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3-kube-api-access-bvt8n\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.302086 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6432092f2a679fd401f62b355afa6cd2e1a6614df1498db7e44271bbdc1b5fea" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.302149 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-cthbx" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.639251 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-958tl/crc-debug-p4l6v"] Jan 24 07:58:14 crc kubenswrapper[4675]: E0124 07:58:14.639938 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerName="container-00" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.639972 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerName="container-00" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.640241 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" containerName="container-00" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.640814 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.746731 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.746933 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.849352 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.849435 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.849858 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.869652 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"crc-debug-p4l6v\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.955552 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6e9f07-4c1a-4a2c-912d-0880df1c82f3" path="/var/lib/kubelet/pods/0c6e9f07-4c1a-4a2c-912d-0880df1c82f3/volumes" Jan 24 07:58:14 crc kubenswrapper[4675]: I0124 07:58:14.956112 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:14 crc kubenswrapper[4675]: W0124 07:58:14.991592 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5404a634_005c_49e9_b722_a5d2c1c1c0eb.slice/crio-a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d WatchSource:0}: Error finding container a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d: Status 404 returned error can't find the container with id a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.340039 4675 generic.go:334] "Generic (PLEG): container finished" podID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerID="e930f56fcc5e1dfe4baa09ef2316036cdb2d6f0adc335e5d2db35065858833aa" exitCode=0 Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.340210 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-p4l6v" event={"ID":"5404a634-005c-49e9-b722-a5d2c1c1c0eb","Type":"ContainerDied","Data":"e930f56fcc5e1dfe4baa09ef2316036cdb2d6f0adc335e5d2db35065858833aa"} Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.340325 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-958tl/crc-debug-p4l6v" event={"ID":"5404a634-005c-49e9-b722-a5d2c1c1c0eb","Type":"ContainerStarted","Data":"a77ff9df7d8443d9eab7d1f75a0fd52f4343388da59f218dd43ef4223e7b0c1d"} Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.392566 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/crc-debug-p4l6v"] Jan 24 07:58:15 crc kubenswrapper[4675]: I0124 07:58:15.401800 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/crc-debug-p4l6v"] Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.454119 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.586402 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") pod \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.587905 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") pod \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\" (UID: \"5404a634-005c-49e9-b722-a5d2c1c1c0eb\") " Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.588193 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host" (OuterVolumeSpecName: "host") pod "5404a634-005c-49e9-b722-a5d2c1c1c0eb" (UID: "5404a634-005c-49e9-b722-a5d2c1c1c0eb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.588847 4675 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5404a634-005c-49e9-b722-a5d2c1c1c0eb-host\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.602009 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px" (OuterVolumeSpecName: "kube-api-access-qx4px") pod "5404a634-005c-49e9-b722-a5d2c1c1c0eb" (UID: "5404a634-005c-49e9-b722-a5d2c1c1c0eb"). InnerVolumeSpecName "kube-api-access-qx4px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.690469 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx4px\" (UniqueName: \"kubernetes.io/projected/5404a634-005c-49e9-b722-a5d2c1c1c0eb-kube-api-access-qx4px\") on node \"crc\" DevicePath \"\"" Jan 24 07:58:16 crc kubenswrapper[4675]: I0124 07:58:16.952895 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" path="/var/lib/kubelet/pods/5404a634-005c-49e9-b722-a5d2c1c1c0eb/volumes" Jan 24 07:58:17 crc kubenswrapper[4675]: I0124 07:58:17.356985 4675 scope.go:117] "RemoveContainer" containerID="e930f56fcc5e1dfe4baa09ef2316036cdb2d6f0adc335e5d2db35065858833aa" Jan 24 07:58:17 crc kubenswrapper[4675]: I0124 07:58:17.357098 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/crc-debug-p4l6v" Jan 24 07:58:22 crc kubenswrapper[4675]: I0124 07:58:22.949144 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:58:22 crc kubenswrapper[4675]: E0124 07:58:22.949911 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:58:34 crc kubenswrapper[4675]: I0124 07:58:34.943356 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:58:34 crc kubenswrapper[4675]: E0124 07:58:34.944198 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" Jan 24 07:58:48 crc kubenswrapper[4675]: I0124 07:58:48.947851 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 07:58:49 crc kubenswrapper[4675]: I0124 07:58:49.689954 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0"} Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.060010 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api/0.log" Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.266613 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79656b6bf8-nwng8_17e03478-4656-43f8-8d7b-5dfb1ff160a1/barbican-api-log/0.log" Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.348693 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener/0.log" Jan 24 07:59:03 crc kubenswrapper[4675]: I0124 07:59:03.449124 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67c5df6588-xqvmq_3c5d104c-9f26-49fd-bec5-f62a53503d42/barbican-keystone-listener-log/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.105671 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.142872 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9646bdbd7-ww6xm_be4ebeb1-6268-4363-948f-8f9aa8f61fe9/barbican-worker-log/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.421023 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-central-agent/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.427187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2d4zw_e9b8f08b-6ece-4b46-86c0-9c353d61c50c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.517243 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/ceilometer-notification-agent/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.672066 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/sg-core/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.728854 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ed571c62-3ced-4952-a932-37a5a84da52f/proxy-httpd/0.log" Jan 24 07:59:04 crc kubenswrapper[4675]: I0124 07:59:04.890300 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.010282 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f870976e-13a5-4226-9eff-18a3244582e8/cinder-api-log/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.202624 4675 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/cinder-scheduler/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.245000 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_31cacad0-4d32-4300-8bdc-bbf15fcd77ac/probe/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.464995 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-td879_bc52fac9-92d8-4555-b942-5f0dcb4bf6f3/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.556860 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vjjjm_eefcce7b-4a2e-40bf-807a-a9db4b9a2a2f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:05 crc kubenswrapper[4675]: I0124 07:59:05.835805 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.033739 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/init/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.433244 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-4qjxm_4a4ca579-5173-42d0-8dd8-d287df832c44/dnsmasq-dns/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.463475 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-49lhh_09d123a4-63c4-4269-b4e1-12932baedfd0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.709857 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-httpd/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.740049 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d0a8fdf4-03fc-4962-8792-6f129d2b00e4/glance-log/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.987683 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-httpd/0.log" Jan 24 07:59:06 crc kubenswrapper[4675]: I0124 07:59:06.991708 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d61eafc8-f960-4335-8d26-2d47e8c7c039/glance-log/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.298580 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/1.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.457601 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.646277 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mwfhh_2d09456f-a230-420b-b288-c0dc3e8a6e22/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.648017 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-656ff794dd-jx8ld_4b7e7730-0a42-48b0-bb7e-da95eb915126/horizon-log/0.log" Jan 24 07:59:07 crc kubenswrapper[4675]: I0124 07:59:07.890527 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vbvgv_27ad7637-701b-43e1-8440-0fd32522fc56/install-os-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.073106 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dbffd67c8-k8gzb_405f0f26-61a4-4420-a147-43d7b86ebb8e/keystone-api/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.154036 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b742b344-80ea-48bf-bd28-8f1be00b4442/kube-state-metrics/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.306256 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-x6lrq_d457c71e-ef41-4bf9-a59b-b3221df26b41/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.756022 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-httpd/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.763478 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77c5f475df-4zndh_4dd8da22-c828-48e1-bbab-d7360beb8d9f/neutron-api/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.838662 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b2446e52-3d97-46f2-ac99-4bb1af82d302/memcached/0.log" Jan 24 07:59:08 crc kubenswrapper[4675]: I0124 07:59:08.854406 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hrg4g_388e10c7-15e4-40d5-94ed-5c6612f7fbfe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.163836 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-log/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.262513 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_a3a43606-cba1-4fca-93c4-a1937ee449cc/nova-cell0-conductor-conductor/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.455890 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_95a0d5c5-541f-4a43-9d20-22264dca21d1/nova-api-api/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.792941 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8afe3d83-5678-47e9-be7d-dfbf50fa5bc9/nova-cell1-conductor-conductor/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.896887 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-k8fng_f4024f70-df50-442c-bcd5-c599d978277c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:09 crc kubenswrapper[4675]: I0124 07:59:09.932766 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a485ae65-6b4d-4cc6-9623-dc0b722f47e8/nova-cell1-novncproxy-novncproxy/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.118118 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-log/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.368116 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_361b5d16-2808-40ad-88a0-f07fd4c33e3e/nova-scheduler-scheduler/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.377610 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log" Jan 24 07:59:10 crc kubenswrapper[4675]: I0124 07:59:10.936908 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d55e1385-c016-4bb9-afc2-a070f5a88241/nova-metadata-metadata/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: 
I0124 07:59:11.075037 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/galera/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.095764 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e189b411-9dd6-496f-a001-41bc90c3fe00/mysql-bootstrap/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.118508 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.361009 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/galera/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.375971 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_009254f3-9d76-4d89-8e35-d2b4c4be0da8/mysql-bootstrap/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.377465 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2dcea499-0c15-4a4c-802b-4ba8ccb4a9cb/openstackclient/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.574755 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2x2kb_b3f62f5e-ba65-4ce0-ac14-1f67ffc54ea1/ovn-controller/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.597386 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b7pft_1e0062ff-7e89-4c55-8796-de1c9e311dd2/openstack-network-exporter/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.678990 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.869989 4675 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovs-vswitchd/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.966259 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server-init/0.log" Jan 24 07:59:11 crc kubenswrapper[4675]: I0124 07:59:11.998075 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fsln2_feda0648-be0d-4fb4-a3a4-42440e47fec0/ovsdb-server/0.log" Jan 24 07:59:12 crc kubenswrapper[4675]: I0124 07:59:12.034267 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7vbln_3e407880-d27a-4aa2-bb81-a87bb20ffcf1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:12 crc kubenswrapper[4675]: I0124 07:59:12.152682 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/openstack-network-exporter/0.log" Jan 24 07:59:12 crc kubenswrapper[4675]: I0124 07:59:12.887741 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_daf62505-a3ad-4c12-a520-4d412d26a71c/ovn-northd/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.013779 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/openstack-network-exporter/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.044012 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19fa54da-8a94-427d-b8c6-0881657d3324/ovsdbserver-nb/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.151034 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/openstack-network-exporter/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 
07:59:13.216279 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1d973fa-2671-49fe-82f1-1862aa70d784/ovsdbserver-sb/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.339275 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-api/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.380032 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5c48f89996-b4jz4_bf1f40fb-34b7-494b-bed1-b851a073ac8c/placement-log/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.506773 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.713482 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/rabbitmq/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.722975 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7c146e5e-4709-4401-a5eb-522609573260/setup-container/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.736678 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log" Jan 24 07:59:13 crc kubenswrapper[4675]: I0124 07:59:13.978082 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/rabbitmq/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.051429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rd4pw_7b1b0570-d3a2-4029-bcf8-f41144ea0f06/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc 
kubenswrapper[4675]: I0124 07:59:14.058586 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3fd85775-321f-4647-95b6-773ec82811e0/setup-container/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.228923 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zd8ln_55150857-7da2-4609-84be-9cbaa28141ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.289160 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rll8q_774fb762-6506-4e0c-9732-9208f7802057/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.376584 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ln8x2_3bc4008d-f8c6-4745-b524-d6136632cbfb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.553415 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wq6r9_191f15b0-8a3b-4dc4-bc49-9003c61619bf/ssh-known-hosts-edpm-deployment/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.680438 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-server/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.717228 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5875964765-b68mp_fa1443f8-8586-4757-9637-378c7c88787d/proxy-httpd/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.823005 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-sz46b_57da3a87-eeeb-47c8-b1bd-6a160dd81ff8/swift-ring-rebalance/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 
07:59:14.934623 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-reaper/0.log" Jan 24 07:59:14 crc kubenswrapper[4675]: I0124 07:59:14.966847 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-auditor/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.085799 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-server/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.101942 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/account-replicator/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.181043 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-auditor/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.222858 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-replicator/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.249305 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-server/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.341944 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/container-updater/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.383096 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-auditor/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.386840 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-expirer/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.461433 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-replicator/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.594021 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/rsync/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.628992 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-server/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.632518 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/object-updater/0.log" Jan 24 07:59:15 crc kubenswrapper[4675]: I0124 07:59:15.640455 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cf53054f-7616-43d6-9aeb-eb5f880b6e40/swift-recon-cron/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 07:59:16.063790 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0e021dd7-397f-4546-a38b-c8c13a1c830d/tempest-tests-tempest-tests-runner/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 07:59:16.208009 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-gwtsx_e47d7738-3361-429e-90f9-02dee4f0052e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 07:59:16.265469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6051ab9a-5c43-4757-a1ff-3f199dee0a79/test-operator-logs-container/0.log" Jan 24 07:59:16 crc kubenswrapper[4675]: I0124 
07:59:16.455110 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8hzfc_e9c128cc-910c-4ef2-9b56-14adf4d264b3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.187712 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-dwbq6_2db25911-f36e-43ae-8f47-b042ec82266e/manager/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.733861 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.945528 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.967541 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:59:44 crc kubenswrapper[4675]: I0124 07:59:44.984077 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.153469 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/util/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.189168 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/pull/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.230093 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cfa0f3287ab0dd31d77b92c35b803cd783319b446af4b6b7b652d952526nxmk_ad9d9d8b-0730-4dc0-bd02-77a7db0b842d/extract/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.375888 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-6jbwg_b8285f65-9930-4bb9-9e18-b6ffe19f45fb/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.447827 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-79fwx_6003a1f9-ad0e-49f6-8750-6ac2208560cc/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.668745 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-thqtz_e7263d16-14c3-4254-821a-cbf99b7cf3e4/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.731838 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-mqk98_7ac3ad9e-a368-46c9-a5ec-d6dc7ca26320/manager/0.log" Jan 24 07:59:45 crc kubenswrapper[4675]: I0124 07:59:45.876850 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-67vkh_4aa5aa88-c6f2-4000-9a9d-3b14e23220de/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.104273 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-l7jq5_06f423e8-7ba9-497d-a587-cc880d66625b/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.173411 4675 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-c5658_743af71f-3542-439c-b3a1-33a7b9ae34f1/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.329477 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bqd4q_5b3a45f7-a1eb-44a2-b0be-7c77b190d50c/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.355981 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-6lq96_e09ce8a8-a2a4-4fec-b36d-a97910aced0f/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.553035 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-vjf84_7660e41e-527d-4806-8ef3-6dee25fa72c5/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.767789 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-dzvlp_724ac56d-9f4e-40f9-98f7-3a65c807f89c/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.873013 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-4lmvf_6f867475-7eee-431c-97ee-12ae861193c7/manager/0.log" Jan 24 07:59:46 crc kubenswrapper[4675]: I0124 07:59:46.987819 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q6qn9_bdc167a3-9335-4b3d-9696-a1d03b9ae618/manager/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.103917 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854h6jvk_ac97fbc7-211e-41e3-8e16-aff853a7c9f4/manager/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 
07:59:47.229638 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d498c57f9-4vbdv_fc267189-e8ca-412c-bb9a-6b251571a514/operator/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.551515 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-d4hsh_954076ba-3e6f-4e5b-9b3f-4637840d5021/registry-server/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.809659 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-n4kll_a1041f21-5d7d-4b17-84ff-ee83332e604d/manager/0.log" Jan 24 07:59:47 crc kubenswrapper[4675]: I0124 07:59:47.924767 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-l5hrz_20b0ee18-4569-4428-956f-d8795904f368/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.130744 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9cmpf_b7d1f492-700c-492e-a1c2-eae496f0133c/operator/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.267329 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d55b89685-9rvmf_4bfb9011-058d-494d-96ce-a39202c7b851/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.394512 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-688fccdd58-dkxf7_d94b056e-c445-4033-8d02-a794dae4b671/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.507665 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-n6jmw_47e89f8e-f652-43a1-a36a-2db184700f3e/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 
07:59:48.563770 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-k7crk_fae349a1-6c08-4424-abe2-42dddccd55cc/manager/0.log" Jan 24 07:59:48 crc kubenswrapper[4675]: I0124 07:59:48.644245 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-9fkjr_f71dd82a-ffe5-4d6e-8bc9-6ec5dcd29480/manager/0.log" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.241639 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g"] Jan 24 08:00:00 crc kubenswrapper[4675]: E0124 08:00:00.243223 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerName="container-00" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.243252 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerName="container-00" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.243767 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5404a634-005c-49e9-b722-a5d2c1c1c0eb" containerName="container-00" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.245089 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.247863 4675 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.248921 4675 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.259345 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g"] Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.355783 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.355862 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.355910 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.458217 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.459668 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.460565 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.459612 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.466696 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.486510 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"collect-profiles-29487360-rrs8g\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:00 crc kubenswrapper[4675]: I0124 08:00:00.583225 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:01 crc kubenswrapper[4675]: I0124 08:00:01.060591 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g"] Jan 24 08:00:01 crc kubenswrapper[4675]: W0124 08:00:01.076592 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eab032c_1f35_4afc_9150_acfc2580e18c.slice/crio-9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4 WatchSource:0}: Error finding container 9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4: Status 404 returned error can't find the container with id 9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4 Jan 24 08:00:01 crc kubenswrapper[4675]: I0124 08:00:01.393848 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerStarted","Data":"8569c3793eb334184607d23f5aab8aef74560951d8618d003b17b886b6ca6126"} Jan 24 08:00:01 crc 
kubenswrapper[4675]: I0124 08:00:01.394218 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerStarted","Data":"9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4"} Jan 24 08:00:01 crc kubenswrapper[4675]: I0124 08:00:01.422902 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" podStartSLOduration=1.422882209 podStartE2EDuration="1.422882209s" podCreationTimestamp="2026-01-24 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:00:01.418920462 +0000 UTC m=+4002.715025725" watchObservedRunningTime="2026-01-24 08:00:01.422882209 +0000 UTC m=+4002.718987442" Jan 24 08:00:02 crc kubenswrapper[4675]: I0124 08:00:02.404084 4675 generic.go:334] "Generic (PLEG): container finished" podID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerID="8569c3793eb334184607d23f5aab8aef74560951d8618d003b17b886b6ca6126" exitCode=0 Jan 24 08:00:02 crc kubenswrapper[4675]: I0124 08:00:02.404223 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerDied","Data":"8569c3793eb334184607d23f5aab8aef74560951d8618d003b17b886b6ca6126"} Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.784687 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.951807 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") pod \"8eab032c-1f35-4afc-9150-acfc2580e18c\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.951958 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") pod \"8eab032c-1f35-4afc-9150-acfc2580e18c\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.952015 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") pod \"8eab032c-1f35-4afc-9150-acfc2580e18c\" (UID: \"8eab032c-1f35-4afc-9150-acfc2580e18c\") " Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.952619 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8eab032c-1f35-4afc-9150-acfc2580e18c" (UID: "8eab032c-1f35-4afc-9150-acfc2580e18c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.958001 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7" (OuterVolumeSpecName: "kube-api-access-ktqw7") pod "8eab032c-1f35-4afc-9150-acfc2580e18c" (UID: "8eab032c-1f35-4afc-9150-acfc2580e18c"). 
InnerVolumeSpecName "kube-api-access-ktqw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:00:03 crc kubenswrapper[4675]: I0124 08:00:03.960035 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8eab032c-1f35-4afc-9150-acfc2580e18c" (UID: "8eab032c-1f35-4afc-9150-acfc2580e18c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.053940 4675 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eab032c-1f35-4afc-9150-acfc2580e18c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.053965 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktqw7\" (UniqueName: \"kubernetes.io/projected/8eab032c-1f35-4afc-9150-acfc2580e18c-kube-api-access-ktqw7\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.053975 4675 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8eab032c-1f35-4afc-9150-acfc2580e18c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.424567 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" event={"ID":"8eab032c-1f35-4afc-9150-acfc2580e18c","Type":"ContainerDied","Data":"9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4"} Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.424928 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9698a28ed3ee898c418f1b2472ef6ed29685f6c54ca4cf70d8c46377f273cdc4" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.424623 4675 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29487360-rrs8g" Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.894943 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"] Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.919549 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29487315-nmtzz"] Jan 24 08:00:04 crc kubenswrapper[4675]: I0124 08:00:04.960223 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992bc9f8-4adf-4940-95d5-942895a4d935" path="/var/lib/kubelet/pods/992bc9f8-4adf-4940-95d5-942895a4d935/volumes" Jan 24 08:00:11 crc kubenswrapper[4675]: I0124 08:00:11.333395 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kdjm5_e08de50b-8092-4f29-b2a8-a391b4778142/control-plane-machine-set-operator/0.log" Jan 24 08:00:11 crc kubenswrapper[4675]: I0124 08:00:11.532709 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/kube-rbac-proxy/0.log" Jan 24 08:00:11 crc kubenswrapper[4675]: I0124 08:00:11.579251 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-577lm_bba258ca-d05a-417e-8a91-73e603062c20/machine-api-operator/0.log" Jan 24 08:00:26 crc kubenswrapper[4675]: I0124 08:00:26.003851 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gt7xw_f9d3eaae-49ca-400c-a277-bdbad7f8125a/cert-manager-controller/0.log" Jan 24 08:00:26 crc kubenswrapper[4675]: I0124 08:00:26.189257 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6kp8k_99008be6-effb-4dc7-a761-ee291c03f093/cert-manager-cainjector/0.log" Jan 24 08:00:26 crc kubenswrapper[4675]: I0124 08:00:26.269683 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-lthpk_261785a7-b436-4597-a36b-473d27769006/cert-manager-webhook/0.log" Jan 24 08:00:41 crc kubenswrapper[4675]: I0124 08:00:41.727614 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-szblh_b289d862-4851-4f88-9a5b-4bed8cd70bd8/nmstate-console-plugin/0.log" Jan 24 08:00:41 crc kubenswrapper[4675]: I0124 08:00:41.879564 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ljst6_8c82b668-f857-4de6-a938-333a7e44591f/nmstate-handler/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.011936 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/nmstate-metrics/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.014240 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-c56d8_56a6d660-7a53-4b25-b4e4-3d3f97a67430/kube-rbac-proxy/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.193378 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dm24p_b344cabd-3dd6-4691-990b-045aaf4c622f/nmstate-operator/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.256429 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-77dfm_469eb31f-c261-4d7f-8a12-c10ed969bd55/nmstate-webhook/0.log" Jan 24 08:00:42 crc kubenswrapper[4675]: I0124 08:00:42.558925 4675 scope.go:117] "RemoveContainer" containerID="4ceca7bb4c3f8f330a726083a805861d2285d706134fb31908c2ce567855cf82" Jan 
24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.798382 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:00:59 crc kubenswrapper[4675]: E0124 08:00:59.799304 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerName="collect-profiles" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.799318 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerName="collect-profiles" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.799540 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eab032c-1f35-4afc-9150-acfc2580e18c" containerName="collect-profiles" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.800831 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.814539 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.895626 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.895712 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc 
kubenswrapper[4675]: I0124 08:00:59.895852 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.997429 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.997538 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.997626 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc kubenswrapper[4675]: I0124 08:00:59.998028 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:00:59 crc 
kubenswrapper[4675]: I0124 08:00:59.998330 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.039472 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"community-operators-2t77m\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.120001 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.156659 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29487361-fcfvr"] Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.163029 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.164741 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29487361-fcfvr"] Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311235 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311316 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311341 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.311410 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412748 4675 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412863 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412932 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.412986 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.420151 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.420558 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.421209 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.440536 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"keystone-cron-29487361-fcfvr\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:00 crc kubenswrapper[4675]: I0124 08:01:00.551473 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:00.988079 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.193588 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29487361-fcfvr"] Jan 24 08:01:01 crc kubenswrapper[4675]: W0124 08:01:01.216418 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5ea98d0_9373_4962_ba8a_a79643b7fdf3.slice/crio-6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7 WatchSource:0}: Error finding container 6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7: Status 404 returned error can't find the container with id 6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7 Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.907762 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerStarted","Data":"d301abf381fbe31993fa9dbe948cb5232c02e73f603e77a2063a184595eaa1c8"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.908012 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerStarted","Data":"6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.909613 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" exitCode=0 Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.909649 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.909670 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerStarted","Data":"2013a713f73b80d7b45183fa61f7e77296379195089f4466ec9588945f1c9cfc"} Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.911138 4675 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 24 08:01:01 crc kubenswrapper[4675]: I0124 08:01:01.929056 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29487361-fcfvr" podStartSLOduration=1.9290433519999999 podStartE2EDuration="1.929043352s" podCreationTimestamp="2026-01-24 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 08:01:01.925813234 +0000 UTC m=+4063.221918447" watchObservedRunningTime="2026-01-24 08:01:01.929043352 +0000 UTC m=+4063.225148575" Jan 24 08:01:02 crc kubenswrapper[4675]: I0124 08:01:02.919119 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerStarted","Data":"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1"} Jan 24 08:01:03 crc kubenswrapper[4675]: I0124 08:01:03.928772 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" exitCode=0 Jan 24 08:01:03 crc kubenswrapper[4675]: I0124 08:01:03.928859 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1"} Jan 24 08:01:04 crc kubenswrapper[4675]: I0124 08:01:04.940040 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerID="d301abf381fbe31993fa9dbe948cb5232c02e73f603e77a2063a184595eaa1c8" exitCode=0 Jan 24 08:01:04 crc kubenswrapper[4675]: I0124 08:01:04.940112 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerDied","Data":"d301abf381fbe31993fa9dbe948cb5232c02e73f603e77a2063a184595eaa1c8"} Jan 24 08:01:05 crc kubenswrapper[4675]: I0124 08:01:05.952265 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerStarted","Data":"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753"} Jan 24 08:01:05 crc kubenswrapper[4675]: I0124 08:01:05.979325 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2t77m" podStartSLOduration=4.540052843 podStartE2EDuration="6.979310872s" podCreationTimestamp="2026-01-24 08:00:59 +0000 UTC" firstStartedPulling="2026-01-24 08:01:01.910931012 +0000 UTC m=+4063.207036235" lastFinishedPulling="2026-01-24 08:01:04.350189041 +0000 UTC m=+4065.646294264" observedRunningTime="2026-01-24 08:01:05.975145281 +0000 UTC m=+4067.271250504" watchObservedRunningTime="2026-01-24 08:01:05.979310872 +0000 UTC m=+4067.275416095" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.321582 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.446901 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.446950 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.447024 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.447927 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") pod \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\" (UID: \"f5ea98d0-9373-4962-ba8a-a79643b7fdf3\") " Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.452369 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv" (OuterVolumeSpecName: "kube-api-access-l66qv") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "kube-api-access-l66qv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.461025 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.490230 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.519871 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data" (OuterVolumeSpecName: "config-data") pod "f5ea98d0-9373-4962-ba8a-a79643b7fdf3" (UID: "f5ea98d0-9373-4962-ba8a-a79643b7fdf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549884 4675 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-config-data\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549916 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l66qv\" (UniqueName: \"kubernetes.io/projected/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-kube-api-access-l66qv\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549925 4675 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.549933 4675 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5ea98d0-9373-4962-ba8a-a79643b7fdf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.961451 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29487361-fcfvr" event={"ID":"f5ea98d0-9373-4962-ba8a-a79643b7fdf3","Type":"ContainerDied","Data":"6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7"} Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.961489 4675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d79aae69180c50e6b9cf5f8fe19e16f1642505105fd7a2b01812388fb484bd7" Jan 24 08:01:06 crc kubenswrapper[4675]: I0124 08:01:06.962345 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29487361-fcfvr" Jan 24 08:01:08 crc kubenswrapper[4675]: I0124 08:01:08.634028 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:01:08 crc kubenswrapper[4675]: I0124 08:01:08.634388 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:01:10 crc kubenswrapper[4675]: I0124 08:01:10.120699 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:10 crc kubenswrapper[4675]: I0124 08:01:10.122943 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:10 crc kubenswrapper[4675]: I0124 08:01:10.178074 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:11 crc kubenswrapper[4675]: I0124 08:01:11.055374 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:11 crc kubenswrapper[4675]: I0124 08:01:11.110874 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.019304 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2t77m" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" 
containerName="registry-server" containerID="cri-o://9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" gracePeriod=2 Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.492952 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.597012 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") pod \"7b602238-1aa0-4cd2-837f-5ee81d490342\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.597092 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") pod \"7b602238-1aa0-4cd2-837f-5ee81d490342\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.597141 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") pod \"7b602238-1aa0-4cd2-837f-5ee81d490342\" (UID: \"7b602238-1aa0-4cd2-837f-5ee81d490342\") " Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.605522 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities" (OuterVolumeSpecName: "utilities") pod "7b602238-1aa0-4cd2-837f-5ee81d490342" (UID: "7b602238-1aa0-4cd2-837f-5ee81d490342"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.625010 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd" (OuterVolumeSpecName: "kube-api-access-cjkvd") pod "7b602238-1aa0-4cd2-837f-5ee81d490342" (UID: "7b602238-1aa0-4cd2-837f-5ee81d490342"). InnerVolumeSpecName "kube-api-access-cjkvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.691709 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b602238-1aa0-4cd2-837f-5ee81d490342" (UID: "7b602238-1aa0-4cd2-837f-5ee81d490342"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.699455 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.699851 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjkvd\" (UniqueName: \"kubernetes.io/projected/7b602238-1aa0-4cd2-837f-5ee81d490342-kube-api-access-cjkvd\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:13 crc kubenswrapper[4675]: I0124 08:01:13.699867 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b602238-1aa0-4cd2-837f-5ee81d490342-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028509 4675 generic.go:334] "Generic (PLEG): container finished" podID="7b602238-1aa0-4cd2-837f-5ee81d490342" 
containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" exitCode=0 Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028553 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753"} Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028580 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t77m" event={"ID":"7b602238-1aa0-4cd2-837f-5ee81d490342","Type":"ContainerDied","Data":"2013a713f73b80d7b45183fa61f7e77296379195089f4466ec9588945f1c9cfc"} Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028597 4675 scope.go:117] "RemoveContainer" containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.028605 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2t77m" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.049169 4675 scope.go:117] "RemoveContainer" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.069369 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.081116 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2t77m"] Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.413928 4675 scope.go:117] "RemoveContainer" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.450502 4675 scope.go:117] "RemoveContainer" containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" Jan 24 08:01:14 crc kubenswrapper[4675]: E0124 08:01:14.450980 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753\": container with ID starting with 9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753 not found: ID does not exist" containerID="9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451024 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753"} err="failed to get container status \"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753\": rpc error: code = NotFound desc = could not find container \"9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753\": container with ID starting with 9e9e4ca6420b069be9ae804b6c9b81511b52fab9146d0487db7646ac2c77c753 not 
found: ID does not exist" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451049 4675 scope.go:117] "RemoveContainer" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" Jan 24 08:01:14 crc kubenswrapper[4675]: E0124 08:01:14.451543 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1\": container with ID starting with 74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1 not found: ID does not exist" containerID="74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451593 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1"} err="failed to get container status \"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1\": rpc error: code = NotFound desc = could not find container \"74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1\": container with ID starting with 74aae38806cb7918eb5456e7e1eee4bdd382646e79b76d9f99ba247016d459b1 not found: ID does not exist" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.451628 4675 scope.go:117] "RemoveContainer" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" Jan 24 08:01:14 crc kubenswrapper[4675]: E0124 08:01:14.452021 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801\": container with ID starting with 936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801 not found: ID does not exist" containerID="936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.452050 4675 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801"} err="failed to get container status \"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801\": rpc error: code = NotFound desc = could not find container \"936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801\": container with ID starting with 936da34e1f346ad2438387cdaa7a9f7b5c77be2ae34b74c2904a02bcd6bfd801 not found: ID does not exist" Jan 24 08:01:14 crc kubenswrapper[4675]: I0124 08:01:14.951990 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" path="/var/lib/kubelet/pods/7b602238-1aa0-4cd2-837f-5ee81d490342/volumes" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.477235 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/controller/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.487252 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-c4k6t_af8e6625-69ed-4901-9577-65cc6fafe0d1/kube-rbac-proxy/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.705176 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.851798 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.899818 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.910158 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:15 crc kubenswrapper[4675]: I0124 08:01:15.952461 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.093583 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.101218 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.102569 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.129582 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.301510 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-frr-files/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.328356 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-reloader/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.355779 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/cp-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.416254 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/controller/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.630317 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.660495 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr-metrics/0.log" Jan 24 08:01:16 crc kubenswrapper[4675]: I0124 08:01:16.721022 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/kube-rbac-proxy-frr/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.015114 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/reloader/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.179344 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-skd24_032ac1eb-bb7f-4f94-b9ad-4d710032f3af/frr-k8s-webhook-server/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.437703 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-57d867674d-x4v6v_0cf0ee32-c416-4629-a441-268fbe054062/manager/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.629542 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-78f4w_fa6ce697-eaf1-4412-a7ca-40a3eb3fa712/frr/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.809551 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f499b46f-tntmc_893cbc8e-86ae-4910-8693-061301da0ba6/webhook-server/0.log" Jan 24 08:01:17 crc kubenswrapper[4675]: I0124 08:01:17.952643 4675 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/kube-rbac-proxy/0.log" Jan 24 08:01:18 crc kubenswrapper[4675]: I0124 08:01:18.223347 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5bpc7_21ad12ca-5157-4c19-9e8c-34fbe8fa9b96/speaker/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.088794 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.279433 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.325734 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.366712 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.540430 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.581383 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/extract/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.590098 4675 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2ggqc_55a17869-4316-441a-ba35-dc9c1660b966/pull/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.752250 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.935225 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 08:01:34 crc kubenswrapper[4675]: I0124 08:01:34.977512 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.022352 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.188601 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/util/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.281972 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/pull/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.284315 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nd6t7_6a14a2ad-1879-4684-b69a-64e6bebf6424/extract/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.426388 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.652191 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.679478 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.685794 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.901845 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-content/0.log" Jan 24 08:01:35 crc kubenswrapper[4675]: I0124 08:01:35.928384 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/extract-utilities/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.420515 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bsdlx_c74192ba-e384-473f-8b1f-5acf16fcf6cb/registry-server/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.490969 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.619287 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.668835 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.689642 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.886797 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-content/0.log" Jan 24 08:01:36 crc kubenswrapper[4675]: I0124 08:01:36.991780 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/extract-utilities/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.330945 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.383881 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9cx7r_83c80cb7-74c3-417a-8d8e-54cdcf640b5b/marketplace-operator/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.579475 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-25b5x_c82ba4e7-d34e-49ce-a0fa-628261617832/registry-server/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.623356 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.658175 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.751297 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 08:01:37 crc kubenswrapper[4675]: I0124 08:01:37.921216 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.018187 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/extract-content/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.087735 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qrkr2_b4b49920-8f11-4ffb-84f0-930d921f722d/registry-server/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.213120 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.380648 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.410784 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.424245 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.594406 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-utilities/0.log" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.629540 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.629603 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:01:38 crc kubenswrapper[4675]: I0124 08:01:38.696640 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/extract-content/0.log" Jan 24 08:01:39 crc kubenswrapper[4675]: I0124 08:01:39.000569 4675 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2zdff_96e2d7dc-bba1-4021-a095-98a4feb924da/registry-server/0.log" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.629914 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.630495 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.630559 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.631614 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 24 08:02:08 crc kubenswrapper[4675]: I0124 08:02:08.631710 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0" gracePeriod=600 Jan 24 08:02:09 crc kubenswrapper[4675]: I0124 08:02:09.998098 4675 generic.go:334] "Generic 
(PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0" exitCode=0 Jan 24 08:02:09 crc kubenswrapper[4675]: I0124 08:02:09.998189 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0"} Jan 24 08:02:09 crc kubenswrapper[4675]: I0124 08:02:09.998765 4675 scope.go:117] "RemoveContainer" containerID="4e1ce005b557b7603b391072dd1176db9331ce8261b612447a535070bf5b244f" Jan 24 08:02:11 crc kubenswrapper[4675]: I0124 08:02:11.008631 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerStarted","Data":"2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"} Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.115357 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:08 crc kubenswrapper[4675]: E0124 08:03:08.117496 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-utilities" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.117595 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-utilities" Jan 24 08:03:08 crc kubenswrapper[4675]: E0124 08:03:08.117680 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerName="keystone-cron" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.117781 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerName="keystone-cron" Jan 24 08:03:08 crc 
kubenswrapper[4675]: E0124 08:03:08.117865 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="registry-server" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.117934 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="registry-server" Jan 24 08:03:08 crc kubenswrapper[4675]: E0124 08:03:08.118031 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-content" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.118101 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="extract-content" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.118480 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b602238-1aa0-4cd2-837f-5ee81d490342" containerName="registry-server" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.118568 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ea98d0-9373-4962-ba8a-a79643b7fdf3" containerName="keystone-cron" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.120337 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.135817 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.281427 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.281483 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.281574 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.383105 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.383829 4675 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.384157 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.384265 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.384598 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.403873 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"redhat-marketplace-nqr4z\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.440205 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:08 crc kubenswrapper[4675]: I0124 08:03:08.966418 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:09 crc kubenswrapper[4675]: W0124 08:03:09.096370 4675 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3824da95_d177_420a_b366_01067cecb438.slice/crio-db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db WatchSource:0}: Error finding container db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db: Status 404 returned error can't find the container with id db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db Jan 24 08:03:09 crc kubenswrapper[4675]: I0124 08:03:09.557846 4675 generic.go:334] "Generic (PLEG): container finished" podID="3824da95-d177-420a-b366-01067cecb438" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" exitCode=0 Jan 24 08:03:09 crc kubenswrapper[4675]: I0124 08:03:09.557945 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78"} Jan 24 08:03:09 crc kubenswrapper[4675]: I0124 08:03:09.558252 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerStarted","Data":"db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db"} Jan 24 08:03:11 crc kubenswrapper[4675]: I0124 08:03:11.578680 4675 generic.go:334] "Generic (PLEG): container finished" podID="3824da95-d177-420a-b366-01067cecb438" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" exitCode=0 Jan 24 08:03:11 crc kubenswrapper[4675]: I0124 
08:03:11.578756 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905"} Jan 24 08:03:12 crc kubenswrapper[4675]: I0124 08:03:12.593661 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerStarted","Data":"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130"} Jan 24 08:03:12 crc kubenswrapper[4675]: I0124 08:03:12.614529 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nqr4z" podStartSLOduration=2.08217616 podStartE2EDuration="4.614514727s" podCreationTimestamp="2026-01-24 08:03:08 +0000 UTC" firstStartedPulling="2026-01-24 08:03:09.559694795 +0000 UTC m=+4190.855800018" lastFinishedPulling="2026-01-24 08:03:12.092033362 +0000 UTC m=+4193.388138585" observedRunningTime="2026-01-24 08:03:12.611551925 +0000 UTC m=+4193.907657148" watchObservedRunningTime="2026-01-24 08:03:12.614514727 +0000 UTC m=+4193.910619950" Jan 24 08:03:18 crc kubenswrapper[4675]: I0124 08:03:18.440608 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:18 crc kubenswrapper[4675]: I0124 08:03:18.442101 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:18 crc kubenswrapper[4675]: I0124 08:03:18.855614 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:19 crc kubenswrapper[4675]: I0124 08:03:19.755103 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 
08:03:19 crc kubenswrapper[4675]: I0124 08:03:19.813288 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:21 crc kubenswrapper[4675]: I0124 08:03:21.699237 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nqr4z" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" containerID="cri-o://bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" gracePeriod=2 Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.190997 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.263438 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") pod \"3824da95-d177-420a-b366-01067cecb438\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.263499 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") pod \"3824da95-d177-420a-b366-01067cecb438\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.263813 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") pod \"3824da95-d177-420a-b366-01067cecb438\" (UID: \"3824da95-d177-420a-b366-01067cecb438\") " Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.264791 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities" (OuterVolumeSpecName: "utilities") pod "3824da95-d177-420a-b366-01067cecb438" (UID: "3824da95-d177-420a-b366-01067cecb438"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.269909 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn" (OuterVolumeSpecName: "kube-api-access-rwqtn") pod "3824da95-d177-420a-b366-01067cecb438" (UID: "3824da95-d177-420a-b366-01067cecb438"). InnerVolumeSpecName "kube-api-access-rwqtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.285593 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3824da95-d177-420a-b366-01067cecb438" (UID: "3824da95-d177-420a-b366-01067cecb438"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.366028 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.366091 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwqtn\" (UniqueName: \"kubernetes.io/projected/3824da95-d177-420a-b366-01067cecb438-kube-api-access-rwqtn\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.366105 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3824da95-d177-420a-b366-01067cecb438-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717556 4675 generic.go:334] "Generic (PLEG): container finished" podID="3824da95-d177-420a-b366-01067cecb438" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" exitCode=0 Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717615 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nqr4z" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717624 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130"} Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717649 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nqr4z" event={"ID":"3824da95-d177-420a-b366-01067cecb438","Type":"ContainerDied","Data":"db6f3583c004e241711716c217c79b1b3d53056eaf2ec5668569d89eefc738db"} Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.717679 4675 scope.go:117] "RemoveContainer" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.753086 4675 scope.go:117] "RemoveContainer" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.762084 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.772646 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nqr4z"] Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.795182 4675 scope.go:117] "RemoveContainer" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.817366 4675 scope.go:117] "RemoveContainer" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" Jan 24 08:03:22 crc kubenswrapper[4675]: E0124 08:03:22.817762 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130\": container with ID starting with bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130 not found: ID does not exist" containerID="bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.817818 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130"} err="failed to get container status \"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130\": rpc error: code = NotFound desc = could not find container \"bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130\": container with ID starting with bfccf48d77fa5a6118ff806203ba10fcc6ada93e441e23183e42a1ee55eac130 not found: ID does not exist" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.817850 4675 scope.go:117] "RemoveContainer" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" Jan 24 08:03:22 crc kubenswrapper[4675]: E0124 08:03:22.818210 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905\": container with ID starting with 2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905 not found: ID does not exist" containerID="2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.818246 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905"} err="failed to get container status \"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905\": rpc error: code = NotFound desc = could not find container \"2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905\": container with ID 
starting with 2a509a2f2705e05d58a16f0c1c465d76b4ba2a5d46dc28b7fec7e4338ba12905 not found: ID does not exist" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.818271 4675 scope.go:117] "RemoveContainer" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" Jan 24 08:03:22 crc kubenswrapper[4675]: E0124 08:03:22.818616 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78\": container with ID starting with 9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78 not found: ID does not exist" containerID="9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.818645 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78"} err="failed to get container status \"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78\": rpc error: code = NotFound desc = could not find container \"9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78\": container with ID starting with 9826a477059175492355cea4096957f9ff64ec980f7e31be4ffcf2499c778d78 not found: ID does not exist" Jan 24 08:03:22 crc kubenswrapper[4675]: I0124 08:03:22.973525 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3824da95-d177-420a-b366-01067cecb438" path="/var/lib/kubelet/pods/3824da95-d177-420a-b366-01067cecb438/volumes" Jan 24 08:03:41 crc kubenswrapper[4675]: I0124 08:03:41.906952 4675 generic.go:334] "Generic (PLEG): container finished" podID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerID="311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526" exitCode=0 Jan 24 08:03:41 crc kubenswrapper[4675]: I0124 08:03:41.907077 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-958tl/must-gather-64vd7" event={"ID":"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5","Type":"ContainerDied","Data":"311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526"} Jan 24 08:03:41 crc kubenswrapper[4675]: I0124 08:03:41.908689 4675 scope.go:117] "RemoveContainer" containerID="311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526" Jan 24 08:03:42 crc kubenswrapper[4675]: I0124 08:03:42.432768 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/gather/0.log" Jan 24 08:03:42 crc kubenswrapper[4675]: I0124 08:03:42.650234 4675 scope.go:117] "RemoveContainer" containerID="24b7020c305a4ecc19f86c8c4874d01aef9f91367091de56d516f83c37e8dff9" Jan 24 08:03:54 crc kubenswrapper[4675]: I0124 08:03:54.657439 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"] Jan 24 08:03:54 crc kubenswrapper[4675]: I0124 08:03:54.658322 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-958tl/must-gather-64vd7" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" containerID="cri-o://c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401" gracePeriod=2 Jan 24 08:03:54 crc kubenswrapper[4675]: I0124 08:03:54.670435 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-958tl/must-gather-64vd7"] Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.032721 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/copy/0.log" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.033303 4675 generic.go:334] "Generic (PLEG): container finished" podID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerID="c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401" exitCode=143 Jan 24 08:03:55 crc 
kubenswrapper[4675]: I0124 08:03:55.609075 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/copy/0.log" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.609756 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.722895 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") pod \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.722977 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") pod \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\" (UID: \"e6b41fa9-a3d8-403c-8aa7-8da5af8796b5\") " Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.737931 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj" (OuterVolumeSpecName: "kube-api-access-zghsj") pod "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" (UID: "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5"). InnerVolumeSpecName "kube-api-access-zghsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.825884 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghsj\" (UniqueName: \"kubernetes.io/projected/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-kube-api-access-zghsj\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.877729 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" (UID: "e6b41fa9-a3d8-403c-8aa7-8da5af8796b5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:03:55 crc kubenswrapper[4675]: I0124 08:03:55.928838 4675 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.045075 4675 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-958tl_must-gather-64vd7_e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/copy/0.log" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.045652 4675 scope.go:117] "RemoveContainer" containerID="c39dbccf2355a3e98d8ea1c8895d61f5a0774063cc91871a2a28f48106ca8401" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.045736 4675 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-958tl/must-gather-64vd7" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.079111 4675 scope.go:117] "RemoveContainer" containerID="311c1a82bd98e8c72527e16eeaa3da0e561f2709ffd2315fbe82f41ebd0fd526" Jan 24 08:03:56 crc kubenswrapper[4675]: I0124 08:03:56.963176 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" path="/var/lib/kubelet/pods/e6b41fa9-a3d8-403c-8aa7-8da5af8796b5/volumes" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.636454 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637449 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="gather" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637463 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="gather" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637502 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637510 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637530 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-content" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637538 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-content" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637561 4675 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-utilities" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637569 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="extract-utilities" Jan 24 08:04:07 crc kubenswrapper[4675]: E0124 08:04:07.637582 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637591 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637840 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="copy" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637857 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b41fa9-a3d8-403c-8aa7-8da5af8796b5" containerName="gather" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.637880 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="3824da95-d177-420a-b366-01067cecb438" containerName="registry-server" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.639483 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.649022 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.812275 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.812321 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.812408 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.913840 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.913884 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.914011 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.914911 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.915179 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.944515 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"redhat-operators-sdxbg\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:07 crc kubenswrapper[4675]: I0124 08:04:07.962069 4675 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:08 crc kubenswrapper[4675]: I0124 08:04:08.448921 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:09 crc kubenswrapper[4675]: I0124 08:04:09.168923 4675 generic.go:334] "Generic (PLEG): container finished" podID="33cfc116-7294-4c74-89f4-e4f3417da631" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474" exitCode=0 Jan 24 08:04:09 crc kubenswrapper[4675]: I0124 08:04:09.168975 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474"} Jan 24 08:04:09 crc kubenswrapper[4675]: I0124 08:04:09.169171 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerStarted","Data":"cd9457c3590ed26f1c22e0f13467d2d764d10baee5be2a45e9a9de71dbebc229"} Jan 24 08:04:11 crc kubenswrapper[4675]: I0124 08:04:11.191368 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerStarted","Data":"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8"} Jan 24 08:04:14 crc kubenswrapper[4675]: I0124 08:04:14.216093 4675 generic.go:334] "Generic (PLEG): container finished" podID="33cfc116-7294-4c74-89f4-e4f3417da631" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8" exitCode=0 Jan 24 08:04:14 crc kubenswrapper[4675]: I0124 08:04:14.216158 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" 
event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8"} Jan 24 08:04:15 crc kubenswrapper[4675]: I0124 08:04:15.228186 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerStarted","Data":"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d"} Jan 24 08:04:15 crc kubenswrapper[4675]: I0124 08:04:15.255648 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdxbg" podStartSLOduration=2.759804538 podStartE2EDuration="8.255629801s" podCreationTimestamp="2026-01-24 08:04:07 +0000 UTC" firstStartedPulling="2026-01-24 08:04:09.170659136 +0000 UTC m=+4250.466764379" lastFinishedPulling="2026-01-24 08:04:14.666484409 +0000 UTC m=+4255.962589642" observedRunningTime="2026-01-24 08:04:15.250531398 +0000 UTC m=+4256.546636621" watchObservedRunningTime="2026-01-24 08:04:15.255629801 +0000 UTC m=+4256.551735024" Jan 24 08:04:17 crc kubenswrapper[4675]: I0124 08:04:17.963155 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:17 crc kubenswrapper[4675]: I0124 08:04:17.963493 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:19 crc kubenswrapper[4675]: I0124 08:04:19.024359 4675 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sdxbg" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server" probeResult="failure" output=< Jan 24 08:04:19 crc kubenswrapper[4675]: timeout: failed to connect service ":50051" within 1s Jan 24 08:04:19 crc kubenswrapper[4675]: > Jan 24 08:04:28 crc kubenswrapper[4675]: I0124 08:04:28.031706 4675 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:28 crc kubenswrapper[4675]: I0124 08:04:28.106440 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:28 crc kubenswrapper[4675]: I0124 08:04:28.279115 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.360065 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdxbg" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server" containerID="cri-o://9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" gracePeriod=2 Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.867865 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.982951 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") pod \"33cfc116-7294-4c74-89f4-e4f3417da631\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.983232 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") pod \"33cfc116-7294-4c74-89f4-e4f3417da631\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.983560 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") pod 
\"33cfc116-7294-4c74-89f4-e4f3417da631\" (UID: \"33cfc116-7294-4c74-89f4-e4f3417da631\") " Jan 24 08:04:29 crc kubenswrapper[4675]: I0124 08:04:29.984116 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities" (OuterVolumeSpecName: "utilities") pod "33cfc116-7294-4c74-89f4-e4f3417da631" (UID: "33cfc116-7294-4c74-89f4-e4f3417da631"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.015950 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl" (OuterVolumeSpecName: "kube-api-access-qhfnl") pod "33cfc116-7294-4c74-89f4-e4f3417da631" (UID: "33cfc116-7294-4c74-89f4-e4f3417da631"). InnerVolumeSpecName "kube-api-access-qhfnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.086419 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhfnl\" (UniqueName: \"kubernetes.io/projected/33cfc116-7294-4c74-89f4-e4f3417da631-kube-api-access-qhfnl\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.086457 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-utilities\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.110207 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33cfc116-7294-4c74-89f4-e4f3417da631" (UID: "33cfc116-7294-4c74-89f4-e4f3417da631"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.188227 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cfc116-7294-4c74-89f4-e4f3417da631-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.370051 4675 generic.go:334] "Generic (PLEG): container finished" podID="33cfc116-7294-4c74-89f4-e4f3417da631" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" exitCode=0 Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.370115 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdxbg" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.370135 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d"} Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.371830 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdxbg" event={"ID":"33cfc116-7294-4c74-89f4-e4f3417da631","Type":"ContainerDied","Data":"cd9457c3590ed26f1c22e0f13467d2d764d10baee5be2a45e9a9de71dbebc229"} Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.371867 4675 scope.go:117] "RemoveContainer" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.394456 4675 scope.go:117] "RemoveContainer" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.429444 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 
08:04:30.432064 4675 scope.go:117] "RemoveContainer" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.440342 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdxbg"] Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.459943 4675 scope.go:117] "RemoveContainer" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" Jan 24 08:04:30 crc kubenswrapper[4675]: E0124 08:04:30.460431 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d\": container with ID starting with 9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d not found: ID does not exist" containerID="9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460477 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d"} err="failed to get container status \"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d\": rpc error: code = NotFound desc = could not find container \"9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d\": container with ID starting with 9e00ab4f1224de9470f1adf24f9e1914b18a0d82f7350e0034278c6b698cad5d not found: ID does not exist" Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460503 4675 scope.go:117] "RemoveContainer" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8" Jan 24 08:04:30 crc kubenswrapper[4675]: E0124 08:04:30.460911 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8\": container with ID 
starting with 7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8 not found: ID does not exist" containerID="7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8"
Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460951 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8"} err="failed to get container status \"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8\": rpc error: code = NotFound desc = could not find container \"7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8\": container with ID starting with 7bd050c9b9829a9a3cd0a0733599d68691b72e8beaba6d40360fbbe77ce5d8a8 not found: ID does not exist"
Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.460980 4675 scope.go:117] "RemoveContainer" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474"
Jan 24 08:04:30 crc kubenswrapper[4675]: E0124 08:04:30.461211 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474\": container with ID starting with aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474 not found: ID does not exist" containerID="aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474"
Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.461246 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474"} err="failed to get container status \"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474\": rpc error: code = NotFound desc = could not find container \"aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474\": container with ID starting with aa7d4b55fe10e39d24a58a56ccdb7a9c55b5d820ca7fa566f066db0f4e410474 not found: ID does not exist"
Jan 24 08:04:30 crc kubenswrapper[4675]: I0124 08:04:30.956354 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" path="/var/lib/kubelet/pods/33cfc116-7294-4c74-89f4-e4f3417da631/volumes"
Jan 24 08:04:38 crc kubenswrapper[4675]: I0124 08:04:38.630072 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 08:04:38 crc kubenswrapper[4675]: I0124 08:04:38.630965 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 08:04:42 crc kubenswrapper[4675]: I0124 08:04:42.751146 4675 scope.go:117] "RemoveContainer" containerID="f73acc78d209e642db0475021e83c828f28a3e3fcb9f35022e1d491b3eba45ef"
Jan 24 08:05:08 crc kubenswrapper[4675]: I0124 08:05:08.630141 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 08:05:08 crc kubenswrapper[4675]: I0124 08:05:08.630587 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 08:05:17 crc kubenswrapper[4675]: I0124 08:05:17.787483 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="009254f3-9d76-4d89-8e35-d2b4c4be0da8" containerName="galera" probeResult="failure" output="command timed out"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219015 4675 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"]
Jan 24 08:05:25 crc kubenswrapper[4675]: E0124 08:05:25.219912 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-utilities"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219925 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-utilities"
Jan 24 08:05:25 crc kubenswrapper[4675]: E0124 08:05:25.219940 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219946 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server"
Jan 24 08:05:25 crc kubenswrapper[4675]: E0124 08:05:25.219963 4675 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-content"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.219969 4675 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="extract-content"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.220174 4675 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cfc116-7294-4c74-89f4-e4f3417da631" containerName="registry-server"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.221794 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.235621 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"]
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.361573 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:25 crc kubenswrapper[4675]: I0124 08:05:25.361652 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:25.361675 4675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.120710 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.120812 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.120853 4675 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.121416 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.121439 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.138747 4675 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"certified-operators-rvq9q\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") " pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.236413 4675 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:26 crc kubenswrapper[4675]: I0124 08:05:26.994384 4675 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"]
Jan 24 08:05:28 crc kubenswrapper[4675]: I0124 08:05:28.257918 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerStarted","Data":"66b1f8af893ade8bcafc88c7de20f7e2d80e545ccaa6b0d7be75135aa388898b"}
Jan 24 08:05:30 crc kubenswrapper[4675]: I0124 08:05:30.275950 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754" exitCode=0
Jan 24 08:05:30 crc kubenswrapper[4675]: I0124 08:05:30.276037 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754"}
Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.349627 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3" exitCode=0
Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.349693 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3"}
Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.630073 4675 patch_prober.go:28] interesting pod/machine-config-daemon-nqn5c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.630137 4675 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.630180 4675 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c"
Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.631014 4675 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"} pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 24 08:05:38 crc kubenswrapper[4675]: I0124 08:05:38.631095 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerName="machine-config-daemon" containerID="cri-o://2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" gracePeriod=600
Jan 24 08:05:39 crc kubenswrapper[4675]: I0124 08:05:39.361089 4675 generic.go:334] "Generic (PLEG): container finished" podID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf" exitCode=0
Jan 24 08:05:39 crc kubenswrapper[4675]: I0124 08:05:39.361131 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" event={"ID":"94e792a6-d8c0-45f7-b7b0-08616d1a9dd5","Type":"ContainerDied","Data":"2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"}
Jan 24 08:05:39 crc kubenswrapper[4675]: I0124 08:05:39.361403 4675 scope.go:117] "RemoveContainer" containerID="be0174938bf27d2086139a9eb48453c049038be5f8027d938c36c6164eb025e0"
Jan 24 08:05:40 crc kubenswrapper[4675]: E0124 08:05:40.384870 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:05:41 crc kubenswrapper[4675]: I0124 08:05:41.383694 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerStarted","Data":"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"}
Jan 24 08:05:41 crc kubenswrapper[4675]: I0124 08:05:41.384213 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:05:41 crc kubenswrapper[4675]: E0124 08:05:41.384441 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:05:42 crc kubenswrapper[4675]: I0124 08:05:42.418326 4675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvq9q" podStartSLOduration=8.144580137 podStartE2EDuration="17.418309277s" podCreationTimestamp="2026-01-24 08:05:25 +0000 UTC" firstStartedPulling="2026-01-24 08:05:31.285639867 +0000 UTC m=+4332.581745090" lastFinishedPulling="2026-01-24 08:05:40.559369007 +0000 UTC m=+4341.855474230" observedRunningTime="2026-01-24 08:05:42.408987751 +0000 UTC m=+4343.705092974" watchObservedRunningTime="2026-01-24 08:05:42.418309277 +0000 UTC m=+4343.714414500"
Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.237322 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.239822 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.307405 4675 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.511303 4675 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:46 crc kubenswrapper[4675]: I0124 08:05:46.578186 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"]
Jan 24 08:05:48 crc kubenswrapper[4675]: I0124 08:05:48.450510 4675 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvq9q" podUID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerName="registry-server" containerID="cri-o://2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4" gracePeriod=2
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.430153 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467045 4675 generic.go:334] "Generic (PLEG): container finished" podID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4" exitCode=0
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467083 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"}
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467103 4675 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq9q" event={"ID":"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad","Type":"ContainerDied","Data":"66b1f8af893ade8bcafc88c7de20f7e2d80e545ccaa6b0d7be75135aa388898b"}
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467119 4675 scope.go:117] "RemoveContainer" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.467215 4675 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvq9q"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.492835 4675 scope.go:117] "RemoveContainer" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.526170 4675 scope.go:117] "RemoveContainer" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.534362 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") pod \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") "
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.534410 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") pod \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") "
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.534687 4675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") pod \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\" (UID: \"f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad\") "
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.536613 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities" (OuterVolumeSpecName: "utilities") pod "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" (UID: "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.538984 4675 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-utilities\") on node \"crc\" DevicePath \"\""
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.543054 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx" (OuterVolumeSpecName: "kube-api-access-n5vfx") pod "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" (UID: "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad"). InnerVolumeSpecName "kube-api-access-n5vfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.598264 4675 scope.go:117] "RemoveContainer" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"
Jan 24 08:05:49 crc kubenswrapper[4675]: E0124 08:05:49.598746 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4\": container with ID starting with 2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4 not found: ID does not exist" containerID="2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.598810 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4"} err="failed to get container status \"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4\": rpc error: code = NotFound desc = could not find container \"2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4\": container with ID starting with 2009f1f8b996b505cca9c47b4c21295f95dcf48f6b561264d5c2580020a4afd4 not found: ID does not exist"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.598832 4675 scope.go:117] "RemoveContainer" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3"
Jan 24 08:05:49 crc kubenswrapper[4675]: E0124 08:05:49.599243 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3\": container with ID starting with f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3 not found: ID does not exist" containerID="f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.599268 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3"} err="failed to get container status \"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3\": rpc error: code = NotFound desc = could not find container \"f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3\": container with ID starting with f3ba5d1148e81c792de018aadb105df56c7ad0d838af84f48046fc80df1c90c3 not found: ID does not exist"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.599285 4675 scope.go:117] "RemoveContainer" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754"
Jan 24 08:05:49 crc kubenswrapper[4675]: E0124 08:05:49.599555 4675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754\": container with ID starting with 76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754 not found: ID does not exist" containerID="76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.599590 4675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754"} err="failed to get container status \"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754\": rpc error: code = NotFound desc = could not find container \"76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754\": container with ID starting with 76097f6ce5d9df75f1d89d47ea655552c7bf57ea447025fefb2b65e483861754 not found: ID does not exist"
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.601783 4675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" (UID: "f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.641025 4675 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.641055 4675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5vfx\" (UniqueName: \"kubernetes.io/projected/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad-kube-api-access-n5vfx\") on node \"crc\" DevicePath \"\""
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.822901 4675 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"]
Jan 24 08:05:49 crc kubenswrapper[4675]: I0124 08:05:49.834244 4675 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvq9q"]
Jan 24 08:05:50 crc kubenswrapper[4675]: I0124 08:05:50.952741 4675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad" path="/var/lib/kubelet/pods/f5dcb73d-0cb0-4a5e-93f8-7c6a12fd4bad/volumes"
Jan 24 08:05:55 crc kubenswrapper[4675]: I0124 08:05:55.942607 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:05:55 crc kubenswrapper[4675]: E0124 08:05:55.943661 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:06:07 crc kubenswrapper[4675]: I0124 08:06:07.943020 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:06:07 crc kubenswrapper[4675]: E0124 08:06:07.943998 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:06:21 crc kubenswrapper[4675]: I0124 08:06:21.942533 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:06:21 crc kubenswrapper[4675]: E0124 08:06:21.943355 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:06:33 crc kubenswrapper[4675]: I0124 08:06:33.943151 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:06:33 crc kubenswrapper[4675]: E0124 08:06:33.944180 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:06:47 crc kubenswrapper[4675]: I0124 08:06:47.943366 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:06:47 crc kubenswrapper[4675]: E0124 08:06:47.944302 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:07:02 crc kubenswrapper[4675]: I0124 08:07:02.943321 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:07:02 crc kubenswrapper[4675]: E0124 08:07:02.944173 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:07:15 crc kubenswrapper[4675]: I0124 08:07:15.942658 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:07:15 crc kubenswrapper[4675]: E0124 08:07:15.943438 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"
Jan 24 08:07:30 crc kubenswrapper[4675]: I0124 08:07:30.943592 4675 scope.go:117] "RemoveContainer" containerID="2b79de1f31caae6d65c65450f61f1ab670d61be9974613d59315c8b1e9251abf"
Jan 24 08:07:30 crc kubenswrapper[4675]: E0124 08:07:30.944550 4675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nqn5c_openshift-machine-config-operator(94e792a6-d8c0-45f7-b7b0-08616d1a9dd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nqn5c" podUID="94e792a6-d8c0-45f7-b7b0-08616d1a9dd5"